From 5efd115e225bff9173af9dd12721ab6cce5ba181 Mon Sep 17 00:00:00 2001
From: awssdkgo

Amazon Kendra:

- Indicates whether Amazon Kendra indexes attachments to the pages and blogs in the Confluence data source.
- Specifies whether Amazon Kendra should index personal spaces. Users can add restrictions to items in personal spaces. If personal spaces are indexed, queries without user context information may return restricted items from a personal space in their results. For more information, see Filtering on user context.
- Specifies whether Amazon Kendra should index archived spaces.
- Indicates whether the response is the best response. True if this is the best response; otherwise, false.
- Indicates whether Amazon Kendra should index attachments to Salesforce objects.
- Indicates that the field can be used to create search facets, a count of results for each value in the field. The default is false.
- Specifies the attachment settings for the Confluence data source. Attachment settings are optional; if you don't specify attachment settings, Amazon Kendra won't index attachments.
- Specifies configuration information for indexing attachments to Confluence blogs and pages. Defines how attachment metadata fields should be mapped to index fields. Before you can map a field, you must first create an index field with a matching type using the console or the UpdateIndex operation. If you specify the AttachentFieldMappings parameter, you must specify at least one field mapping.
- The name of the field in the data source. You must first create the index field using the UpdateIndex operation.
- Defines the mapping between a field in the Confluence data source and an Amazon Kendra index field. You must first create the index field using the UpdateIndex operation.
- Specifies the blog settings for the Confluence data source. Blogs are always indexed unless filtered from the index by the ExclusionPatterns or InclusionPatterns fields in the data type.
- Specifies configuration information for indexing Confluence blogs. Defines how blog metadata fields should be mapped to index fields. Before you can map a field, you must first create an index field with a matching type using the console or the UpdateIndex operation. If you specify the BlogFieldMappings parameter, you must specify at least one field mapping.
- Provides configuration information for connecting to a Confluence data source.
- Specifies the page settings for the Confluence data source. Specifies configuration information for indexing Confluence pages. Defines how page metadata fields should be mapped to index fields. Before you can map a field, you must first create an index field with a matching type using the console or the UpdateIndex operation. If you specify the PageFieldMappings parameter, you must specify at least one field mapping.
- Specifies the configuration for indexing Confluence spaces. Defines how space metadata fields should be mapped to index fields. Before you can map a field, you must first create an index field with a matching type using the console or the UpdateIndex operation. If you specify the SpaceFieldMappings parameter, you must specify at least one field mapping.
- A list of space keys for Confluence spaces. If you include a key, the blogs, documents, and attachments in the space are indexed. Spaces that aren't in the list aren't indexed. A space in the list must exist; otherwise, Amazon Kendra logs an error when the data source is synchronized. If a space is in both the IncludeSpaces and the ExcludeSpaces lists, the space is excluded.
- A list of space keys of Confluence spaces. If you include a key, the blogs, documents, and attachments in the space are not indexed. If a space is in both the ExcludeSpaces and the IncludeSpaces lists, the space is excluded.
- Specifies the version of the Confluence installation that you are connecting to.
- Provides the information necessary to connect to a database.
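The IncludeSpaces/ExcludeSpaces precedence described above (a space key that appears in both lists is excluded) can be sketched as a small helper. This is an illustrative sketch, not SDK code; the function name and the assumption that an empty include list means "index everything not excluded" are mine:

```go
package main

import "fmt"

// shouldIndexSpace applies the documented precedence for Confluence space
// keys: a key in the exclude list is never indexed, even if it also appears
// in the include list; otherwise a non-empty include list acts as an
// allow-list (assumption for this sketch).
func shouldIndexSpace(key string, include, exclude []string) bool {
	for _, k := range exclude {
		if k == key {
			return false // exclusion always wins
		}
	}
	if len(include) == 0 {
		return true // no allow-list: index everything not excluded
	}
	for _, k := range include {
		if k == key {
			return true
		}
	}
	return false
}

func main() {
	include := []string{"ENG", "DOCS"}
	exclude := []string{"DOCS"}
	fmt.Println(shouldIndexSpace("ENG", include, exclude))  // true
	fmt.Println(shouldIndexSpace("DOCS", include, exclude)) // false: key is in both lists
	fmt.Println(shouldIndexSpace("HR", include, exclude))   // false: not in the allow-list
}
```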
- The format for date fields in the data source. If the field specified in DataSourceFieldName is a date field, you must specify the date format. If the field is not a date field, an exception is thrown.
- The type of data stored in the column or attribute.
- A list of regular expression patterns that apply to a URL on the Confluence server. An inclusion pattern can apply to a blog post, a page, a space, or an attachment. Items that match the patterns are included in the index. Items that don't match the pattern are excluded from the index. If an item matches both an inclusion pattern and an exclusion pattern, the item isn't included in the index.
- A list of regular expression patterns that apply to a URL on the Confluence server. An exclusion pattern can apply to a blog post, a page, a space, or an attachment. Items that match the pattern are excluded from the index. Items that don't match the pattern are included in the index. If an item matches both an exclusion pattern and an inclusion pattern, the item isn't included in the index.
- A list of regular expression patterns. Documents that match the pattern are included in the index. Documents that don't match the pattern are excluded from the index. If a document matches both an inclusion pattern and an exclusion pattern, the document is not included in the index. The exclusion pattern is applied to the file name.
- List of regular expressions applied to documents. Items that match the exclusion pattern are not indexed. If you provide both an inclusion pattern and an exclusion pattern, any item that matches the exclusion pattern isn't indexed. The exclusion pattern is applied to the file name.
- A list of S3 prefixes for the documents that should be included in the index.
- Specifies the information for connecting to an Amazon VPC.
- The name of the index field to map to the Confluence data source field. The index field type must match the Confluence field type.
- The name of the field in the index.
- The Amazon Resource Name (ARN) of an AWS Secrets Manager secret that contains the key/value pairs required to connect to your Confluence server. The secret must contain a JSON structure with the following keys: username - The user name of a user with administrative privileges for the Confluence server. password - The password associated with the user logging in to the Confluence server.
- The Amazon Resource Name (ARN) of credentials stored in AWS Secrets Manager. The credentials should be a user/password pair. For more information, see Using a Database Data Source. For more information about AWS Secrets Manager, see What Is AWS Secrets Manager in the AWS Secrets Manager User Guide.
- The Amazon Resource Name (ARN) of an AWS Secrets Manager secret that contains the user name and password to connect to OneDrive. The user name should be the application ID for the OneDrive application, and the password is the application key for the OneDrive application.
- The Amazon Resource Name (ARN) of an AWS Secrets Manager secret that contains the key/value pairs required to connect to your Salesforce instance. The secret must contain a JSON structure with the following keys: authenticationUrl - The OAUTH endpoint that Amazon Kendra connects to get an OAUTH token. consumerKey - The application public key generated when you created your Salesforce application. consumerSecret - The application private key generated when you created your Salesforce application. password - The password associated with the user logging in to the Salesforce instance. securityToken - The token associated with the user account logging in to the Salesforce instance. username - The user name of the user logging in to the Salesforce instance.
- The URL of your Confluence instance. Use the full URL of the server.
- The URI of the original location of the document.
- The instance URL for the Salesforce site that you want to index.

Amazon Neptune:

- Copies a snapshot of a DB cluster. To copy a DB cluster snapshot from a shared manual DB cluster snapshot,
- Copies the specified DB parameter group.
- Creates a new Amazon Neptune DB cluster. Note that when you create a new cluster using
- Creates a new custom endpoint and associates it with an Amazon Neptune DB cluster.
- Creates a new DB cluster parameter group. Parameters in a DB cluster parameter group apply to all of the instances in a DB cluster. A DB cluster parameter group is initially created with the default parameters for the database engine used by instances in the DB cluster. To provide custom values for any of the parameters, you must modify the group after creating it using ModifyDBClusterParameterGroup. Once you've created a DB cluster parameter group, you need to associate it with your DB cluster using ModifyDBCluster. When you associate a new DB cluster parameter group with a running DB cluster, you need to reboot the DB instances in the DB cluster without failover for the new DB cluster parameter group and associated settings to take effect. After you create a DB cluster parameter group, you should wait at least 5 minutes before creating your first DB cluster that uses that DB cluster parameter group as the default parameter group. This allows Amazon Neptune to fully complete the create action before the DB cluster parameter group is used as the default for a new DB cluster. This is especially important for parameters that are critical when creating the default database for a DB cluster, such as the character set for the default database.
- Creates a snapshot of a DB cluster.
- Creates a new DB instance.
- Creates a new DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
- Creates an event notification subscription. This action requires a topic ARN (Amazon Resource Name) created by either the Neptune console, the SNS console, or the SNS API. To obtain an ARN with SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console. You can specify the type of source (SourceType) you want to be notified of, provide a list of Neptune sources (SourceIds) that triggers the events, and provide a list of event categories (EventCategories) for events you want to be notified of. For example, you can specify SourceType = db-instance, SourceIds = mydbinstance1, mydbinstance2 and EventCategories = Availability, Backup. If you specify both the SourceType and SourceIds, such as SourceType = db-instance and SourceIdentifier = myDBInstance1, you are notified of all the db-instance events for the specified source. If you specify a SourceType but do not specify a SourceIdentifier, you receive notice of the events for that source type for all your Neptune sources. If you do not specify either the SourceType or the SourceIdentifier, you are notified of events generated from all Neptune sources belonging to your customer account.
- The DeleteDBCluster action deletes a previously provisioned DB cluster. When you delete a DB cluster, all automated backups for that DB cluster are deleted and can't be recovered. Manual DB cluster snapshots of the specified DB cluster are not deleted. Note that the DB cluster cannot be deleted if deletion protection is enabled; to delete it, you must first set its
- Deletes a custom endpoint and removes it from an Amazon Neptune DB cluster.
- Deletes a specified DB cluster parameter group. The DB cluster parameter group to be deleted can't be associated with any DB clusters.
- Deletes a DB cluster snapshot. If the snapshot is being copied, the copy operation is terminated. The DB cluster snapshot must be in the
- The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by If you request a final DB snapshot, the status of the Amazon Neptune DB instance is Note that when a DB instance is in a failure state and has a status of You can't delete a DB instance if it is the only instance in the DB cluster, or if it has deletion protection enabled.
- Deletes a specified DBParameterGroup. The DBParameterGroup to be deleted can't be associated with any DB instances.
- Deletes a DB subnet group. The specified database subnet group must not be associated with any DB instances.
- Deletes an event notification subscription.
- Returns information about endpoints for an Amazon Neptune DB cluster. This operation can also return information for Amazon RDS clusters and Amazon DocDB clusters.
- Returns a list of
- Returns the detailed parameter list for a particular DB cluster parameter group.
- Returns a list of DB cluster snapshot attribute names and values for a manual DB cluster snapshot. To add or remove access for an AWS account to copy or restore a manual DB cluster snapshot, or to make the manual DB cluster snapshot public or private, use the ModifyDBClusterSnapshotAttribute API action.
- Forces a failover for a DB cluster. A failover for a DB cluster promotes one of the Read Replicas (read-only instances) in the DB cluster to be the primary instance (the cluster writer).
- Amazon Neptune will automatically fail over to a Read Replica, if one exists, when the primary instance fails. You can force a failover when you want to simulate a failure of a primary instance for testing. Because each instance in a DB cluster has its own endpoint address, you will need to clean up and re-establish any existing connections that use those endpoint addresses when the failover is complete.
- Lists all tags on an Amazon Neptune resource.
- Modify a setting for a DB cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request.
- Modifies the properties of an endpoint in an Amazon Neptune DB cluster.
- Modifies the parameters of a DB cluster parameter group. Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot without failover to the DB cluster associated with the parameter group before the change can take effect. After you create a DB cluster parameter group, you should wait at least 5 minutes before creating your first DB cluster that uses that DB cluster parameter group as the default parameter group. This allows Amazon Neptune to fully complete the create action before the parameter group is used as the default for a new DB cluster. This is especially important for parameters that are critical when creating the default database for a DB cluster, such as the character set for the default database.
- Adds an attribute and values to, or removes an attribute and values from, a manual DB cluster snapshot. To view which AWS accounts have access to copy or restore a manual DB cluster snapshot, or whether a manual DB cluster snapshot is public or private, use the DescribeDBClusterSnapshotAttributes API action.
- Modifies settings for a DB instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. To learn what modifications you can make to your DB instance, call DescribeValidDBInstanceModifications before you call ModifyDBInstance.
- True to copy all tags from the source DB cluster snapshot to the target DB cluster snapshot, and otherwise false. The default is false.
- Specifies whether the DB cluster is encrypted.
- True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false. Default: Not supported by Neptune.
- A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is enabled.
- Specifies if the DB instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the MultiAZ parameter is set to true.
- Indicates that minor engine upgrades are applied automatically to the DB instance during the maintenance window.
- This data type represents the information you need to connect to an Amazon Neptune DB cluster. This data type is used as a response element in the following actions:
- The specified custom endpoint cannot be created because it already exists.
- Contains the details of the endpoints associated with the cluster and matching any filter conditions.
- The specified custom endpoint doesn't exist.
- The cluster already has the maximum number of custom endpoints.
- Specifies a connection endpoint.
- A set of name-value pairs that define which endpoints to include in the output. The filters are specified as name-value pairs.
- This parameter is not currently supported.
- Specifies the Provisioned IOPS (I/O operations per second) value.
- The interval, in seconds, between points when Enhanced Monitoring metrics are collected for the DB instance.
- A value that specifies the order in which a Read Replica is promoted to the primary instance after a failure of the existing primary instance.
- The maximum number of records to include in the response. Default: 100. Constraints: Minimum 20, maximum 100.
- The port number on which the new DB cluster accepts connections. Default: The same port as the original DB cluster.
- The requested operation cannot be performed on the endpoint while the endpoint is in this state.
- The supplied value is not a valid DB cluster snapshot state.
- The list of logs that the restored DB cluster is to export to CloudWatch Logs.
- This data type represents the information you need to connect to an Amazon Aurora DB cluster.
- The name of the DB cluster to associate the IAM role with.
- The Amazon Resource Name (ARN) of the IAM role to associate with the Neptune DB cluster.
- The name of the feature for the Neptune DB cluster that the IAM role is to be associated with. For the list of supported feature names, see DBEngineVersion.
- The name of the event notification subscription you want to add a source identifier to.
- The identifier of the event source to be added.
- The Amazon Neptune resource that the tags are added to. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an Amazon Resource Name (ARN).
- The identifier or ARN for the source DB parameter group. Constraints: Must specify a valid DB parameter group identifier.
- The identifier for the copied DB parameter group. Constraints: Cannot be null, empty, or blank. Must contain from 1 to 255 letters, numbers, or hyphens. First character must be a letter. Cannot end with a hyphen or contain two consecutive hyphens.
- A description for the copied DB parameter group.
- The DB cluster identifier of the DB cluster associated with the endpoint. This parameter is stored as a lowercase string.
- The identifier to use for the new endpoint. This parameter is stored as a lowercase string.
- The type of the endpoint.
- A unique system-generated identifier for an endpoint. It remains the same for the whole life of the endpoint.
- The DNS address of the endpoint.
- The current status of the endpoint.
- The type associated with a custom endpoint.
- The Amazon Resource Name (ARN) for the endpoint.
- (Not supported by Neptune) The name for your database of up to 64 alphanumeric characters. If you do not provide a name, Amazon Neptune will not create a database in the DB cluster you are creating.
- The DB cluster identifier. This parameter is stored as a lowercase string. Constraints: Must contain from 1 to 63 letters, numbers, or hyphens. First character must be a letter. Cannot end with a hyphen or contain two consecutive hyphens.
- The name of the DB cluster parameter group to associate with this DB cluster. If this argument is omitted, the default is used. Constraints: If supplied, must match the name of an existing DBClusterParameterGroup.
- A DB subnet group to associate with this DB cluster. Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.
- The name of the database engine to be used for this DB cluster.
- The version number of the database engine to use. Currently, setting this parameter has no effect.
- The version number of the database engine to use for the new DB cluster.
- The name of the master user for the DB cluster. Constraints: Must be 1 to 16 letters or numbers. First character must be a letter. Cannot be a reserved word for the chosen database engine.
- The password for the master database user. This password can contain any printable ASCII character except "/", """, or "@". Constraints: Must contain from 8 to 41 characters.
- (Not supported by Neptune) The AWS Region-unique, immutable identifier for the DB cluster. This identifier is found in AWS CloudTrail log entries whenever the AWS KMS key for the DB cluster is accessed.
- The Amazon Resource Name (ARN) for the DB cluster.
- Identifies the clone group to which the DB cluster is associated.
- An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by
- Specifies the instance identifier for this member of the DB cluster.
- Specifies the status of the DB cluster parameter group for this member of the DB cluster.
- A pagination token that can be used in a subsequent DescribeDBClusters request.
- The Amazon Resource Name (ARN) of the IAM role that is associated with the DB cluster.
- Describes the state of association between the IAM role and the DB cluster.
- The name of the feature associated with the AWS Identity and Access Management (IAM) role. For the list of supported feature names, see DBEngineVersion.
- Specifies the identifier for a DB cluster snapshot. Must match the identifier of an existing snapshot.
- Specifies the DB cluster identifier of the DB cluster that this DB cluster snapshot was created from.
- Specifies the name of the database engine.
- Provides the status of the DB subnet group.
- The Amazon Resource Name (ARN) for the DB subnet group.
- The identifier associated with the custom endpoint. This parameter is stored as a lowercase string.
- The DB cluster identifier for the DB cluster to be deleted. This parameter isn't case-sensitive. Constraints: Must match an existing DBClusterIdentifier.
- The DB cluster snapshot identifier of the new DB cluster snapshot created when Constraints: Must be 1 to 255 letters, numbers, or hyphens. First character must be a letter. Cannot end with a hyphen or contain two consecutive hyphens.
- The name of the DB cluster parameter group. Constraints: Must be the name of an existing DB cluster parameter group. You can't delete a default DB cluster parameter group. Cannot be associated with any DB clusters.
- The name of the DB parameter group. Constraints: Must be the name of an existing DB parameter group. You can't delete a default DB parameter group. Cannot be associated with any DB instances.
- The name of the database subnet group to delete. You can't delete the default subnet group. Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.
- The name of the event notification subscription you want to delete.
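The MaxRecords/marker pagination described above (pass the returned token back until it comes back empty) can be sketched as a small driver loop. A stdlib-only sketch, not SDK code; the page type and describe callback are stand-ins for a Describe* call:

```go
package main

import "fmt"

// page mimics a Describe* response: a slice of records plus an optional
// marker for the next page (empty when there are no more records).
type page struct {
	Records []string
	Marker  string
}

// fetchAll drains a marker-paginated API: it passes each returned marker
// back into the next call until the marker comes back empty, as with
// MaxRecords/Marker on the Describe actions.
func fetchAll(describe func(marker string) page) []string {
	var all []string
	marker := ""
	for {
		p := describe(marker)
		all = append(all, p.Records...)
		if p.Marker == "" {
			return all
		}
		marker = p.Marker
	}
}

func main() {
	// A fake two-page data set standing in for DescribeDBClusters output.
	pages := map[string]page{
		"":   {Records: []string{"cluster-1", "cluster-2"}, Marker: "p2"},
		"p2": {Records: []string{"cluster-3"}},
	}
	got := fetchAll(func(m string) page { return pages[m] })
	fmt.Println(got) // [cluster-1 cluster-2 cluster-3]
}
```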
- The identifier of the endpoint to describe. This parameter is stored as a lowercase string.
- The name of a specific DB cluster parameter group to return details for. Constraints: If supplied, must match the name of an existing DBClusterParameterGroup.
- The name of a specific DB cluster parameter group to return parameter details for. Constraints: If supplied, must match the name of an existing DBClusterParameterGroup.
- The Amazon Neptune resource with tags to be listed. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an Amazon Resource Name (ARN).
- The identifier of the endpoint to modify. This parameter is stored as a lowercase string.
- The DB cluster identifier for the cluster being modified. This parameter is not case-sensitive. Constraints: Must match the identifier of an existing DBCluster.
- The new DB cluster identifier for the DB cluster when renaming a DB cluster. This value is stored as a lowercase string. Constraints: Must contain from 1 to 63 letters, numbers, or hyphens. The first character must be a letter. Cannot end with a hyphen or contain two consecutive hyphens.
- The name of the DB cluster parameter group to use for the DB cluster.
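The identifier constraints stated above (1 to 63 letters, numbers, or hyphens; first character a letter; no trailing hyphen; no two consecutive hyphens) can be checked with a short validator. A sketch only; the function name is illustrative, not part of the SDK:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// idShape enforces length 1-63, a leading letter, and the allowed
// character set (letters, numbers, hyphens).
var idShape = regexp.MustCompile(`^[A-Za-z][A-Za-z0-9-]{0,62}$`)

// validClusterID applies the documented DB cluster identifier constraints,
// adding the no-trailing-hyphen and no-consecutive-hyphens rules the
// regexp alone doesn't capture.
func validClusterID(id string) bool {
	return idShape.MatchString(id) &&
		!strings.HasSuffix(id, "-") &&
		!strings.Contains(id, "--")
}

func main() {
	for _, id := range []string{"my-cluster-1", "1cluster", "my--cluster", "cluster-"} {
		fmt.Println(id, validClusterID(id))
	}
	// my-cluster-1 true; the other three violate one constraint each
}
```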
- (Not supported by Neptune) The daily time range during which automated backups are created if automated backups are enabled. The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. Constraints: Must be in Universal Coordinated Time (UTC). Must not conflict with the preferred maintenance window. Must be at least 30 minutes.
- The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC). The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun. Constraints: Minimum 30-minute window.
- The version number of the database engine. Currently, setting this parameter has no effect. To upgrade your database engine to the most recent release, use the ApplyPendingMaintenanceAction API. For a list of valid engine versions, see CreateDBInstance, or call DescribeDBEngineVersions.
- The version number of the database engine to which you want to upgrade. Changing this parameter results in an outage. The change is applied during the next maintenance window unless the For a list of valid engine versions, see Engine Releases for Amazon Neptune, or call DescribeDBEngineVersions.
- The name of the DB cluster parameter group to modify.
- The identifier for the DB cluster snapshot to modify the attributes for.
- The name of the DB cluster snapshot attribute to modify. To manage authorization for other AWS accounts to copy or restore a manual DB cluster snapshot, set this value to
- The DB instance identifier. This parameter is stored as a lowercase string. Constraints: Must match the identifier of an existing DBInstance.
- The name of the DB cluster to disassociate the IAM role from.
- The Amazon Resource Name (ARN) of the IAM role to disassociate from the DB cluster.
- The name of the feature for the DB cluster that the IAM role is to be disassociated from. For the list of supported feature names, see DBEngineVersion.
- The name of the event notification subscription you want to remove a source identifier from.
- The source identifier to be removed from the subscription, such as the DB instance identifier for a DB instance or the name of a security group.
- The Amazon Neptune resource that the tags are removed from. This value is an Amazon Resource Name (ARN). For information about creating an ARN, see Constructing an Amazon Resource Name (ARN).
- The DB cluster identifier of the Neptune DB cluster to be started. This parameter is stored as a lowercase string.
- The DB cluster identifier of the Neptune DB cluster to be stopped. This parameter is stored as a lowercase string.
- Specifies the identifier of the subnet.
- Specifies the status of the subnet.
- The status of the VPC security group.
- List of DB instance identifiers that are part of the custom endpoint group.
- List of DB instance identifiers that aren't part of the custom endpoint group. All other eligible instances are reachable through the custom endpoint. Only relevant if the list of static members is empty.
- Specifies a subnet. This data type is used as a response element in the DescribeDBSubnetGroups action.
- The tags to be assigned to the copied DB cluster parameter group.
- The tags to assign to the new DB cluster snapshot copy.
- The tags to be assigned to the copied DB parameter group.
- The tags to be assigned to the Amazon Neptune resource.
- The tags to assign to the new DB cluster.
- The tags to be assigned to the new DB cluster parameter group.
- The tags to be assigned to the DB cluster snapshot.
.UpdateIndex
operation.AttachmentFieldMappings
parameter, you must specify at least one field mapping.ExclusionPatterns
or InclusionPatterns
fields in the data type.UpdateIndex
operation.BlogFieldMappings
parameter, you must specify at least one field mapping.UpdateIndex
operation.PageFieldMappings
parameter, you must specify at least one field mapping.UpdateIndex
operation.SpaceFieldMappings
parameter, you must specify at least one field mapping.IncludeSpaces
and the ExcludeSpaces
list, the space is excluded.ExcludeSpaces
and the IncludeSpaces
list, the space is excluded.DataSourceFieldName
is a date field you must specify the date format. If the field is not a date field, an exception is thrown.DataSourceFieldName
is a date field you must specify the date format. If the field is not a date field, an exception is thrown.DataSourceFieldName
is a date field you must specify the date format. If the field is not a date field, an exception is thrown.DataSourceFieldName
is a date field you must specify the date format. If the field is not a date field, an exception is thrown.
",
"ConnectionConfiguration$SecretArn": "
",
@@ -1600,6 +1736,7 @@
"Url": {
"base": null,
"refs": {
+ "ConfluenceConfiguration$ServerUrl": "https://server.example.com:port/
. You can also use an IP address, for example, https://192.168.1.113/
.SourceDBClusterSnapshotIdentifier
must be the Amazon Resource Name (ARN) of the shared DB cluster snapshot.ReplicationSourceIdentifier
parameter to create the DB cluster as a Read Replica of another DB cluster or Amazon Neptune DB instance.CreateDBCluster
directly, deletion protection is disabled by default (when you create a new production cluster in the console, deletion protection is enabled by default). You can only delete a DB cluster if its DeletionProtection
field is set to false
.character_set_database
parameter. You can use the Parameter Groups option of the Amazon Neptune console or the DescribeDBClusterParameters command to verify that your DB cluster parameter group has been created or modified.DeletionProtection
field to False
.available
state to be deleted.DeleteDBInstance
are not deleted.deleting
until the DB snapshot is created. The API action DescribeDBInstance
is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.failed
, incompatible-restore
, or incompatible-network
, you can only delete it when the SkipFinalSnapshot
parameter is set to true
.DBClusterParameterGroup
descriptions. If a DBClusterParameterGroupName
parameter is specified, the list will contain only the description of the specified DB cluster parameter group.DescribeDBClusterSnapshotAttributes
returns the restore
attribute and a list of IDs for the AWS accounts that are authorized to copy or restore the manual DB cluster snapshot. If all
is included in the list of values for the restore
attribute, then the manual DB cluster snapshot is public and can be copied or restored by all AWS accounts.ParameterName
, ParameterValue
, and ApplyMethod
. A maximum of 20 parameters can be modified in a single request.character_set_database
parameter. You can use the Parameter Groups option of the Amazon Neptune console or the DescribeDBClusterParameters command to verify that your DB cluster parameter group has been created or modified.restore
as the AttributeName
and use the ValuesToAdd
parameter to add a list of IDs of the AWS accounts that are authorized to restore the manual DB cluster snapshot. Use the value all
to make the manual DB cluster snapshot public, which means that it can be copied or restored by all AWS accounts. Do not add the all
value for any manual DB cluster snapshots that contain private information that you don't want available to all AWS accounts. If a manual DB cluster snapshot is encrypted, it can be shared, but only by specifying a list of authorized AWS account IDs for the ValuesToAdd
parameter. You can't use all
as a value for that parameter in this case.false
true
CreateDBClusterEndpoint
DescribeDBClusterEndpoints
ModifyDBClusterEndpoint
DeleteDBClusterEndpoint
Endpoint
.
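The snapshot-sharing semantics described above (the `restore` attribute, `ValuesToAdd`/`ValuesToRemove`, the special `all` value, and the restriction for encrypted snapshots) can be modeled as a small sketch. This is not an SDK call; it only mirrors the documented behavior.

```python
def modify_snapshot_restore_attribute(current, values_to_add=(),
                                      values_to_remove=(), encrypted=False):
    # Documented rule: an encrypted manual DB cluster snapshot can be
    # shared only with an explicit list of account IDs, never "all".
    if encrypted and "all" in values_to_add:
        raise ValueError('encrypted snapshots cannot be shared with "all"')
    accounts = set(current) | set(values_to_add)
    accounts -= set(values_to_remove)
    return sorted(accounts)

def is_public(restore_values):
    # If "all" appears in the restore attribute values, the snapshot
    # can be copied or restored by all AWS accounts.
    return "all" in restore_values
```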
CreateDBClusterEndpoint
DescribeDBClusterEndpoints
ModifyDBClusterEndpoint
DeleteDBClusterEndpoint
Endpoint
.
CreateDBClusterEndpoint
DescribeDBClusterEndpoints
ModifyDBClusterEndpoint
DeleteDBClusterEndpoint
Endpoint
.DBClusterEndpoint
.Name=endpoint_type,Values=endpoint_type1,endpoint_type2,...
. Name
can be one of: db-cluster-endpoint-type
, db-cluster-endpoint-custom-type
, db-cluster-endpoint-id
, db-cluster-endpoint-status
. Values
for the db-cluster-endpoint-type
filter can be one or more of: reader
, writer
, custom
. Values
for the db-cluster-endpoint-custom-type
filter can be one or more of: reader
, any
. Values
for the db-cluster-endpoint-status
filter can be one or more of: available
, creating
, deleting
, inactive
, modifying
. MaxRecords
value, a pagination token called a marker is included in the response so you can retrieve the remaining results. MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.MaxRecords
value, a pagination token called a marker is included in the response so that the remaining results can be retrieved.1150-65535
CreateDBClusterEndpoint
DescribeDBClusterEndpoints
ModifyDBClusterEndpoint
DeleteDBClusterEndpoint
Endpoint
.arn:aws:iam::123456789012:role/NeptuneAccessRole
.
",
"AddTagsToResourceMessage$ResourceName": "DBInstanceIdentifier
must be supplied.DBSecurityGroupName
must be supplied.DBParameterGroupName
must be supplied.DBSnapshotIdentifier
must be supplied.
",
"CopyDBParameterGroupMessage$TargetDBParameterGroupIdentifier": "my-db-param-group
, or a valid ARN.
my-db-parameter-group
READER
, WRITER
, ANY
.creating
, available
, deleting
, inactive
, modifying
. The inactive
state applies to an endpoint that cannot be used for a certain kind of cluster, such as a writer
endpoint for a read-only secondary cluster in a global database.READER
, WRITER
, CUSTOM
.READER
, WRITER
, ANY
.
my-cluster1
",
"CreateDBClusterMessage$DBSubnetGroupName": "mySubnetgroup
neptune
1.0.1
1.0.2.1
",
"CreateDBClusterMessage$MasterUserPassword": "creating
, available
, deleting
, inactive
, modifying
. The inactive
state applies to an endpoint that cannot be used for a certain kind of cluster, such as a writer
endpoint for a read-only secondary cluster in a global database.READER
, WRITER
, CUSTOM
.READER
, WRITER
, ANY
.DescribeDBClusterEndpoints
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
. DescribeDBClusterParameterGroups
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
",
+ "DBClusterRole$FeatureName": "ACTIVE
- the IAM role ARN is associated with the DB cluster and can be used to access other AWS services on your behalf.PENDING
- the IAM role ARN is being associated with the DB cluster.INVALID
- the IAM role ARN is associated with the DB cluster, but the DB cluster is unable to assume the IAM role in order to access other AWS services on your behalf.DBClusterSnapshotIdentifier
, you must specify the same DBClusterSnapshotIdentifier
for any future updates to the DB cluster. When you specify this property for an update, the DB cluster is not restored from the snapshot again, and the data in the database is not changed.DBClusterSnapshotIdentifier
, an empty DB cluster is created, and the original DB cluster is deleted. If you specify a property that is different from the previous snapshot restore property, the DB cluster is restored from the snapshot specified by the DBClusterSnapshotIdentifier
, and the original DB cluster is deleted.MaxRecords
.creating
, available
, deleting
, inactive
, modifying
. The inactive
state applies to an endpoint that cannot be used for a certain kind of cluster, such as a writer
endpoint for a read-only secondary cluster in a global database.READER
, WRITER
, CUSTOM
.READER
, WRITER
, ANY
.
",
"DeleteDBClusterMessage$FinalDBSnapshotIdentifier": "SkipFinalSnapshot
is set to false
.SkipFinalSnapshot
parameter to true results in an error.
",
"DeleteDBClusterParameterGroupMessage$DBClusterParameterGroupName": "
",
@@ -1758,6 +1867,9 @@
"DeleteDBParameterGroupMessage$DBParameterGroupName": "
",
"DeleteDBSubnetGroupMessage$DBSubnetGroupName": "mySubnetgroup
DescribeDBClusterEndpoints
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
",
"DescribeDBClusterParameterGroupsMessage$Marker": "DescribeDBClusterParameterGroups
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
",
@@ -1829,6 +1941,16 @@
"KeyList$member": null,
"ListTagsForResourceMessage$ResourceName": "READER
, WRITER
, ANY
.creating
, available
, deleting
, inactive
, modifying
. The inactive
state applies to an endpoint that cannot be used for a certain kind of cluster, such as a writer
endpoint for a read-only secondary cluster in a global database.READER
, WRITER
, CUSTOM
.READER
, WRITER
, ANY
.
",
"ModifyDBClusterMessage$NewDBClusterIdentifier": "
my-cluster2
BackupRetentionPeriod
parameter.
",
"ModifyDBClusterMessage$PreferredMaintenanceWindow": "hh24:mi-hh24:mi
.ddd:hh24:mi-ddd:hh24:mi
ApplyImmediately
parameter is set to true.restore
.
",
"RemoveRoleFromDBClusterMessage$DBClusterIdentifier": "arn:aws:iam::123456789012:role/NeptuneAccessRole
.
Create a machine learning algorithm that you can use in Amazon SageMaker and list in the AWS Marketplace.
", "CreateApp": "Creates a running App for the specified UserProfile. Supported Apps are JupyterServer and KernelGateway. This operation is automatically invoked by Amazon SageMaker Studio upon access to the associated Domain, and when new kernel configurations are selected by the user. A user may have multiple Apps active simultaneously.
", + "CreateAppImageConfig": "Creates a configuration for running an Amazon SageMaker image as a KernelGateway app.
", "CreateAutoMLJob": "Creates an Autopilot job.
Find the best performing model after you run an Autopilot job by calling . Deploy that model by following the steps described in Step 6.1: Deploy the Model to Amazon SageMaker Hosting Services.
For information about how to use Autopilot, see Automate Model Development with Amazon SageMaker Autopilot.
", "CreateCodeRepository": "Creates a Git repository as a resource in your Amazon SageMaker account. You can associate the repository with notebook instances so that you can use Git source control for the notebooks you create. The Git repository is a resource in your Amazon SageMaker account, so it can be associated with more than one notebook instance, and it persists independently from the lifecycle of any notebook instances it is associated with.
The repository can be hosted either in AWS CodeCommit or in any other Git repository.
", "CreateCompilationJob": "Starts a model compilation job. After the model has been compiled, Amazon SageMaker saves the resulting model artifacts to an Amazon Simple Storage Service (Amazon S3) bucket that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts with AWS IoT Greengrass. In that case, deploy them as an ML resource.
In the request body, you provide the following:
A name for the compilation job
Information about the input model artifacts
The output location for the compiled model and the device (target) that the model runs on
The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker assumes to perform the model compilation job.
You can also provide a Tag
to track the model compilation job's resource use and costs. The response body contains the CompilationJobArn
for the compiled job.
To stop a model compilation job, use StopCompilationJob. To get information about a particular model compilation job, use DescribeCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
", - "CreateDomain": "Creates a Domain
used by Amazon SageMaker Studio. A domain consists of an associated Amazon Elastic File System (EFS) volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. An AWS account is limited to one domain per region. Users within a domain can share notebook files and other artifacts with each other.
When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files.
VPC configuration
All SageMaker Studio traffic between the domain and the EFS volume is through the specified VPC and subnets. For other Studio traffic, you can specify the AppNetworkAccessType
parameter. AppNetworkAccessType
corresponds to the network access type that you choose when you onboard to Studio. The following options are available:
PublicInternetOnly
- Non-EFS traffic goes through a VPC managed by Amazon SageMaker, which allows internet access. This is the default value.
VpcOnly
- All Studio traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway.
When internet access is disabled, you won't be able to train or host models unless your VPC has an interface endpoint (PrivateLink) or a NAT gateway and your security groups allow outbound connections.
VpcOnly
network access type
When you choose VpcOnly
, you must specify the following:
Security group inbound and outbound rules to allow NFS traffic over TCP on port 2049 between the domain and the EFS volume
Security group inbound and outbound rules to allow traffic between the JupyterServer app and the KernelGateway apps
Interface endpoints to access the SageMaker API and SageMaker runtime
For more information, see:
Creates a Domain
used by Amazon SageMaker Studio. A domain consists of an associated Amazon Elastic File System (EFS) volume, a list of authorized users, and a variety of security, application, policy, and Amazon Virtual Private Cloud (VPC) configurations. An AWS account is limited to one domain per region. Users within a domain can share notebook files and other artifacts with each other.
When a domain is created, an EFS volume is created for use by all of the users within the domain. Each user receives a private home directory within the EFS volume for notebooks, Git repositories, and data files.
VPC configuration
All SageMaker Studio traffic between the domain and the EFS volume is through the specified VPC and subnets. For other Studio traffic, you can specify the AppNetworkAccessType
parameter. AppNetworkAccessType
corresponds to the network access type that you choose when you onboard to Studio. The following options are available:
PublicInternetOnly
- Non-EFS traffic goes through a VPC managed by Amazon SageMaker, which allows internet access. This is the default value.
VpcOnly
- All Studio traffic is through the specified VPC and subnets. Internet access is disabled by default. To allow internet access, you must specify a NAT gateway.
When internet access is disabled, you won't be able to run a Studio notebook or to train or host models unless your VPC has an interface endpoint to the SageMaker API and runtime or a NAT gateway and your security groups allow outbound connections.
For more information, see Connect SageMaker Studio Notebooks to Resources in a VPC.
", "CreateEndpoint": "Creates an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. You create the endpoint configuration with the CreateEndpointConfig API.
Use this API to deploy models using Amazon SageMaker hosting services.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
You must not delete an EndpointConfig
that is in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
The endpoint name must be unique within an AWS Region in your AWS account.
When it receives the request, Amazon SageMaker creates the endpoint, launches the resources (ML compute instances), and deploys the model(s) on them.
When you call CreateEndpoint, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads
, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
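The retry logic recommended above for the eventually consistent read can be sketched as a simple backoff loop. `fn` stands in for a `CreateEndpoint`-style call, and the retriable exception type here is an assumption for illustration only.

```python
import time

def call_with_retry(fn, attempts=5, base_delay=0.5, retriable=(KeyError,)):
    # Retry with exponential backoff so a short-lived validation error
    # caused by stale reads can resolve before the final attempt.
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```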
When Amazon SageMaker receives the request, it sets the endpoint status to Creating
. After it creates the endpoint, it sets the status to InService
. Amazon SageMaker can then process incoming requests for inferences. To check the status of an endpoint, use the DescribeEndpoint API.
If any of the models hosted at this endpoint get model data from an Amazon S3 location, Amazon SageMaker uses AWS Security Token Service to download model artifacts from the S3 path you provided. AWS STS is activated in your IAM user account by default. If you previously deactivated AWS STS for a region, you need to reactivate AWS STS for that region. For more information, see Activating and Deactivating AWS STS in an AWS Region in the AWS Identity and Access Management User Guide.
", "CreateEndpointConfig": "Creates an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel
API, to deploy and the resources that you want Amazon SageMaker to provision. Then you call the CreateEndpoint API.
Use this API if you want to use Amazon SageMaker hosting services to deploy models into production.
In the request, you define a ProductionVariant
, for each model that you want to deploy. Each ProductionVariant
parameter also describes the resources that you want Amazon SageMaker to provision. This includes the number and type of ML compute instances to deploy.
If you are hosting multiple models, you also assign a VariantWeight
to specify how much traffic you want to allocate to each model. For example, suppose that you want to host two models, A and B, and you assign traffic weight 2 for model A and 1 for model B. Amazon SageMaker distributes two-thirds of the traffic to Model A, and one-third to model B.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
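The `VariantWeight` allocation described above (weight 2 for model A and 1 for model B yielding a two-thirds/one-third split) is just weight divided by the sum of all weights, as this sketch shows:

```python
def traffic_fractions(variant_weights):
    # Each variant receives VariantWeight / sum(all weights) of the traffic.
    total = sum(variant_weights.values())
    return {name: w / total for name, w in variant_weights.items()}

fractions = traffic_fractions({"ModelA": 2, "ModelB": 1})
```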
When you call CreateEndpoint, a load call is made to DynamoDB to verify that your endpoint configuration exists. When you read data from a DynamoDB table supporting Eventually Consistent Reads
, the response might not reflect the results of a recently completed write operation. The response might include some stale data. If the dependent entities are not yet in DynamoDB, this causes a validation error. If you repeat your read request after a short time, the response should return the latest data. So retry logic is recommended to handle these possible issues. We also recommend that customers call DescribeEndpointConfig before calling CreateEndpoint to minimize the potential impact of a DynamoDB eventually consistent read.
Creates a SageMaker experiment. An experiment is a collection of trials that are observed, compared, and evaluated as a group. A trial is a set of steps, called trial components, that produce a machine learning model.
The goal of an experiment is to determine the components that produce the best model. Multiple trials are performed, each one isolating and measuring the impact of a change to one or more inputs, while keeping the remaining inputs constant.
When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK.
You can add tags to experiments, trials, trial components and then use the Search API to search for the tags.
To add a description to an experiment, specify the optional Description
parameter. To add a description later, or to change the description, call the UpdateExperiment API.
To get a list of all your experiments, call the ListExperiments API. To view an experiment's properties, call the DescribeExperiment API. To get a list of all the trials associated with an experiment, call the ListTrials API. To create a trial call the CreateTrial API.
", "CreateFlowDefinition": "Creates a flow definition.
", "CreateHumanTaskUi": "Defines the settings you will use for the human review workflow user interface. Reviewers will see a three-panel interface with an instruction area, the item to review, and an input area.
", "CreateHyperParameterTuningJob": "Starts a hyperparameter tuning job. A hyperparameter tuning job finds the best version of a model by running many training jobs on your dataset using the algorithm you choose and values for hyperparameters within ranges that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by an objective metric that you choose.
", + "CreateImage": "Creates a SageMaker Image
. A SageMaker image represents a set of container images. Each of these container images is represented by a SageMaker ImageVersion
.
Creates a version of the SageMaker image specified by ImageName
. The version represents the Amazon Elastic Container Registry (ECR) container image specified by BaseImage
.
Creates a job that uses workers to label the data objects in your input dataset. You can use the labeled data to train machine learning models.
You can select your workforce from one of three providers:
A private workforce that you create. It can include employees, contractors, and outside experts. Use a private workforce when you want the data to stay within your organization or when a specific set of skills is required.
One or more vendors that you select from the AWS Marketplace. Vendors provide expertise in specific areas.
The Amazon Mechanical Turk workforce. This is the largest workforce, but it should only be used for public data or data that has been stripped of any personally identifiable information.
You can also use automated data labeling to reduce the number of data objects that need to be labeled by a human. Automated data labeling uses active learning to determine if a data object can be labeled by machine or if it needs to be sent to a human worker. For more information, see Using Automated Data Labeling.
The data objects to be labeled are contained in an Amazon S3 bucket. You create a manifest file that describes the location of each object. For more information, see Using Input and Output Data.
The output can be used as the manifest file for another labeling job or as training data for your machine learning models.
", "CreateModel": "Creates a model in Amazon SageMaker. In the request, you name the model and describe a primary container. For the primary container, you specify the Docker image that contains inference code, artifacts (from prior training), and a custom environment map that the inference code uses when you deploy the model for predictions.
Use this API to create a model if you want to use Amazon SageMaker hosting services or run a batch transform job.
To host your model, you create an endpoint configuration with the CreateEndpointConfig
API, and then create an endpoint with the CreateEndpoint
API. Amazon SageMaker then deploys all of the containers that you defined for the model in the hosting environment.
For an example that calls this method when deploying a model to Amazon SageMaker hosting services, see Deploy the Model to Amazon SageMaker Hosting Services (AWS SDK for Python (Boto 3)).
To run a batch transform using your model, you start a job with the CreateTransformJob
API. Amazon SageMaker uses your model and your dataset to get inferences which are then saved to a specified S3 location.
In the CreateModel
request, you must define a container with the PrimaryContainer
parameter.
In the request, you also provide an IAM role that Amazon SageMaker can assume to access model artifacts and the Docker image for deployment on ML compute hosting instances or for batch transform jobs. In addition, you also use the IAM role to manage permissions the inference code needs. For example, if the inference code accesses any other AWS resources, you grant the necessary permissions via this role.
", "CreateModelPackage": "Creates a model package that you can use to create Amazon SageMaker models or list on AWS Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace to create models in Amazon SageMaker.
To create a model package by specifying a Docker container that contains your inference code and the Amazon S3 location of your model artifacts, provide values for InferenceSpecification
. To create a model from an algorithm resource that you created or subscribed to in AWS Marketplace, provide a value for SourceAlgorithmSpecification
.
Creates a new work team for labeling your data. A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team.
You cannot create more than 25 work teams in an account and region.
", "DeleteAlgorithm": "Removes the specified algorithm from your account.
", "DeleteApp": "Used to stop and delete an app.
", + "DeleteAppImageConfig": "Deletes an AppImageConfig.
", "DeleteCodeRepository": "Deletes the specified Git repository from your account.
", "DeleteDomain": "Used to delete a domain. If you onboarded with IAM mode, you will need to delete your domain to onboard again using SSO. Use with caution. All of the members of the domain will lose access to their EFS volume, including data, notebooks, and other artifacts.
", "DeleteEndpoint": "Deletes an endpoint. Amazon SageMaker frees up all of the resources that were deployed when the endpoint was created.
Amazon SageMaker retires any custom KMS key grants associated with the endpoint, meaning you don't need to use the RevokeGrant API call.
", @@ -41,6 +45,8 @@ "DeleteExperiment": "Deletes an Amazon SageMaker experiment. All trials associated with the experiment must be deleted first. Use the ListTrials API to get a list of the trials associated with the experiment.
", "DeleteFlowDefinition": "Deletes the specified flow definition.
", "DeleteHumanTaskUi": "Use this operation to delete a human task user interface (worker task template).
To see a list of human task user interfaces (work task templates) in your account, use . When you delete a worker task template, it no longer appears when you call ListHumanTaskUis
.
Deletes a SageMaker image and all versions of the image. The container images aren't deleted.
", + "DeleteImageVersion": "Deletes a version of a SageMaker image. The container image the version represents isn't deleted.
", "DeleteModel": "Deletes a model. The DeleteModel
API deletes only the model entry that was created in Amazon SageMaker when you called the CreateModel API. It does not delete model artifacts, inference code, or the IAM role that you specified when creating the model.
Deletes a model package.
A model package is used to create Amazon SageMaker models or list on AWS Marketplace. Buyers can subscribe to model packages listed on AWS Marketplace to create models in Amazon SageMaker.
", "DeleteMonitoringSchedule": "Deletes a monitoring schedule. Also stops the schedule had not already been stopped. This does not delete the job execution history of the monitoring schedule.
", @@ -54,6 +60,7 @@ "DeleteWorkteam": "Deletes an existing work team. This operation can't be undone.
", "DescribeAlgorithm": "Returns a description of the specified algorithm that is in your account.
", "DescribeApp": "Describes the app.
", + "DescribeAppImageConfig": "Describes an AppImageConfig.
", "DescribeAutoMLJob": "Returns information about an Amazon SageMaker job.
", "DescribeCodeRepository": "Gets details about the specified Git repository.
", "DescribeCompilationJob": "Returns information about a model compilation job.
To create a model compilation job, use CreateCompilationJob. To get information about multiple model compilation jobs, use ListCompilationJobs.
", @@ -64,6 +71,8 @@ "DescribeFlowDefinition": "Returns information about the specified flow definition.
", "DescribeHumanTaskUi": "Returns information about the requested human task user interface (worker task template).
", "DescribeHyperParameterTuningJob": "Gets a description of a hyperparameter tuning job.
", + "DescribeImage": "Describes a SageMaker image.
", + "DescribeImageVersion": "Describes a version of a SageMaker image.
", "DescribeLabelingJob": "Gets information about a labeling job.
", "DescribeModel": "Describes a model that you created using the CreateModel
API.
Returns a description of the specified model package, which is used to create Amazon SageMaker models or list them on AWS Marketplace.
To create models in Amazon SageMaker, buyers can subscribe to model packages listed on AWS Marketplace.
", @@ -82,6 +91,7 @@ "DisassociateTrialComponent": "Disassociates a trial component from a trial. This doesn't effect other trials the component is associated with. Before you can delete a component, you must disassociate the component from all trials it is associated with. To associate a trial component with a trial, call the AssociateTrialComponent API.
To get a list of the trials a component is associated with, use the Search API. Specify ExperimentTrialComponent
for the Resource
parameter. The list appears in the response under Results.TrialComponent.Parents
.
An auto-complete API for the search functionality in the Amazon SageMaker console. It returns suggestions of possible matches for the property name to use in Search
queries. Provides suggestions for HyperParameters
, Tags
, and Metrics
.
Lists the machine learning algorithms that have been created.
", + "ListAppImageConfigs": "Lists the AppImageConfigs in your account and their properties. The list can be filtered by creation time or modified time, and whether the AppImageConfig name contains a specified string.
", "ListApps": "Lists apps.
", "ListAutoMLJobs": "Request a list of jobs.
", "ListCandidatesForAutoMLJob": "List the Candidates created for the job.
", @@ -94,6 +104,8 @@ "ListFlowDefinitions": "Returns information about the flow definitions in your account.
", "ListHumanTaskUis": "Returns information about the human task user interfaces in your account.
", "ListHyperParameterTuningJobs": "Gets a list of HyperParameterTuningJobSummary objects that describe the hyperparameter tuning jobs launched in your account.
", + "ListImageVersions": "Lists the versions of a specified image and their properties. The list can be filtered by creation time or modified time.
", + "ListImages": "Lists the images in your account and their properties. The list can be filtered by creation time or modified time, and whether the image name contains a specified string.
", "ListLabelingJobs": "Gets a list of labeling jobs.
", "ListLabelingJobsForWorkteam": "Gets a list of labeling jobs assigned to a specified work team.
", "ListModelPackages": "Lists the model packages that have been created.
", @@ -126,11 +138,13 @@ "StopProcessingJob": "Stops a processing job.
", "StopTrainingJob": "Stops a training job. To stop a job, Amazon SageMaker sends the algorithm the SIGTERM
signal, which delays job termination for 120 seconds. Algorithms might use this 120-second window to save the model artifacts, so that the results of training are not lost.
When it receives a StopTrainingJob
request, Amazon SageMaker changes the status of the job to Stopping
. After Amazon SageMaker stops the job, it sets the status to Stopped
.
Stops a transform job.
When Amazon SageMaker receives a StopTransformJob
request, the status of the job changes to Stopping
. After Amazon SageMaker stops the job, the status is set to Stopped
. When you stop a transform job before it is completed, Amazon SageMaker doesn't store the job's output in Amazon S3.
Updates the properties of an AppImageConfig.
", "UpdateCodeRepository": "Updates the specified Git repository with the specified values.
", "UpdateDomain": "Updates the default settings for new user profiles in the domain.
", "UpdateEndpoint": "Deploys the new EndpointConfig
specified in the request, switches to using the newly created endpoint, and then deletes resources provisioned for the endpoint using the previous EndpointConfig
(there is no availability loss).
When Amazon SageMaker receives the request, it sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
You must not delete an EndpointConfig
in use by an endpoint that is live or while the UpdateEndpoint
or CreateEndpoint
operations are being performed on the endpoint. To update an endpoint, you must create a new EndpointConfig
.
If you delete the EndpointConfig
of an endpoint that is active or being created or updated, you may lose visibility into the instance type the endpoint is using. The endpoint must be deleted in order to stop incurring charges.
Updates variant weight of one or more variants associated with an existing endpoint, or capacity of one variant associated with an existing endpoint. When it receives the request, Amazon SageMaker sets the endpoint status to Updating
. After updating the endpoint, it sets the status to InService
. To check the status of an endpoint, use the DescribeEndpoint API.
Adds, updates, or removes the description of an experiment. Updates the display name of an experiment.
", + "UpdateImage": "Updates the properties of a SageMaker image. To change the image's tags, use the AddTags and DeleteTags APIs.
", "UpdateMonitoringSchedule": "Updates a previously created schedule.
", "UpdateNotebookInstance": "Updates a notebook instance. NotebookInstance updates include upgrading or downgrading the ML compute instance used for your notebook instance to accommodate changes in your workload requirements.
", "UpdateNotebookInstanceLifecycleConfig": "Updates a notebook instance lifecycle configuration created with the CreateNotebookInstanceLifecycleConfig API.
", @@ -268,20 +282,60 @@ "AppArn": { "base": null, "refs": { - "CreateAppResponse$AppArn": "The App's Amazon Resource Name (ARN).
", - "DescribeAppResponse$AppArn": "The app's Amazon Resource Name (ARN).
" + "CreateAppResponse$AppArn": "The Amazon Resource Name (ARN) of the app.
", + "DescribeAppResponse$AppArn": "The Amazon Resource Name (ARN) of the app.
" } }, "AppDetails": { - "base": "The app's details.
", + "base": "Details about an Amazon SageMaker app.
", "refs": { "AppList$member": null } }, + "AppImageConfigArn": { + "base": null, + "refs": { + "AppImageConfigDetails$AppImageConfigArn": "The Amazon Resource Name (ARN) of the AppImageConfig.
", + "CreateAppImageConfigResponse$AppImageConfigArn": "The Amazon Resource Name (ARN) of the AppImageConfig.
", + "DescribeAppImageConfigResponse$AppImageConfigArn": "The Amazon Resource Name (ARN) of the AppImageConfig.
", + "UpdateAppImageConfigResponse$AppImageConfigArn": "The Amazon Resource Name (ARN) for the AppImageConfig.
" + } + }, + "AppImageConfigDetails": { + "base": "The configuration for running an Amazon SageMaker image as a KernelGateway app.
", + "refs": { + "AppImageConfigList$member": null + } + }, + "AppImageConfigList": { + "base": null, + "refs": { + "ListAppImageConfigsResponse$AppImageConfigs": "A list of AppImageConfigs and their properties.
" + } + }, + "AppImageConfigName": { + "base": null, + "refs": { + "AppImageConfigDetails$AppImageConfigName": "The name of the AppImageConfig.
", + "CreateAppImageConfigRequest$AppImageConfigName": "The name of the AppImageConfig. Must be unique to your account.
", + "CustomImage$AppImageConfigName": "The name of the AppImageConfig.
", + "DeleteAppImageConfigRequest$AppImageConfigName": "The name of the AppImageConfig to delete.
", + "DescribeAppImageConfigRequest$AppImageConfigName": "The name of the AppImageConfig to describe.
", + "DescribeAppImageConfigResponse$AppImageConfigName": "The name of the AppImageConfig.
", + "ListAppImageConfigsRequest$NameContains": "A filter that returns only AppImageConfigs whose name contains the specified string.
", + "UpdateAppImageConfigRequest$AppImageConfigName": "The name of the AppImageConfig to update.
" + } + }, + "AppImageConfigSortKey": { + "base": null, + "refs": { + "ListAppImageConfigsRequest$SortBy": "The property used to sort results. The default value is CreationTime
.
The instance type.
" + "ResourceSpec$InstanceType": "The instance type that the image version runs on.
" } }, "AppList": { @@ -342,9 +396,7 @@ "base": null, "refs": { "AlgorithmSpecification$AlgorithmName": "The name of the algorithm resource to use for the training job. This must be an algorithm resource that you created or subscribe to on AWS Marketplace. If you specify a value for this parameter, you can't specify a value for TrainingImage
.
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
", "DescribeAlgorithmInput$AlgorithmName": "The name of the algorithm to describe.
", - "DescribeModelPackageInput$ModelPackageName": "The name of the model package to describe.
", "HyperParameterAlgorithmSpecification$AlgorithmName": "The name of the resource algorithm to use for the hyperparameter tuning job. If you specify a value for this parameter, do not specify a value for TrainingImage
.
The name of an algorithm that was used to create the model package. The algorithm must be either an algorithm resource in your Amazon SageMaker account or an algorithm in AWS Marketplace that you are subscribed to.
" } @@ -822,6 +874,12 @@ "OidcConfig$ClientSecret": "The OIDC IdP client secret used to configure your private workforce.
" } }, + "ClientToken": { + "base": null, + "refs": { + "CreateImageVersionRequest$ClientToken": "A unique ID. If not specified, the AWS CLI and AWS SDKs, such as the SDK for Python (Boto3), add a unique value to the call.
" + } + }, "CodeRepositoryArn": { "base": null, "refs": { @@ -959,7 +1017,7 @@ "CompilerOptions": { "base": null, "refs": { - "OutputConfig$CompilerOptions": "Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform
specific. It is required for NVIDIA accelerators and highly recommended for CPU compilations. For any other cases, it is optional to specify CompilerOptions.
CPU
: Compilation for CPU supports the following compiler options.
mcpu
: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
mattr
: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}
ARM
: Details of ARM CPU compilations.
NEON
: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.
For example, add {'mattr': ['+neon']}
to the compiler options if compiling for ARM 32-bit platform with the NEON support.
NVIDIA
: Compilation for NVIDIA GPU supports the following compiler options.
gpu_code
: Specifies the targeted architecture.
trt-ver
: Specifies the TensorRT versions in x.y.z. format.
cuda-ver
: Specifies the CUDA version in x.y format.
For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}
ANDROID
: Compilation for the Android OS supports the following compiler options:
ANDROID_PLATFORM
: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}
.
mattr
: Add {'mattr': ['+neon']}
to compiler options if compiling for ARM 32-bit platform with NEON support.
CoreML
: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:
class_labels
: Specifies the classification labels file name inside input tar.gz file. For example, {\"class_labels\": \"imagenet_labels_1000.txt\"}
. Labels inside the txt file should be separated by newlines.
Specifies additional parameters for compiler options in JSON format. The compiler options are TargetPlatform
specific. It is required for NVIDIA accelerators and highly recommended for CPU compilations. For any other cases, it is optional to specify CompilerOptions.
CPU
: Compilation for CPU supports the following compiler options.
mcpu
: CPU micro-architecture. For example, {'mcpu': 'skylake-avx512'}
mattr
: CPU flags. For example, {'mattr': ['+neon', '+vfpv4']}
ARM
: Details of ARM CPU compilations.
NEON
: NEON is an implementation of the Advanced SIMD extension used in ARMv7 processors.
For example, add {'mattr': ['+neon']}
to the compiler options if compiling for ARM 32-bit platform with the NEON support.
NVIDIA
: Compilation for NVIDIA GPU supports the following compiler options.
gpu_code
: Specifies the targeted architecture.
trt-ver
: Specifies the TensorRT versions in x.y.z. format.
cuda-ver
: Specifies the CUDA version in x.y format.
For example, {'gpu-code': 'sm_72', 'trt-ver': '6.0.1', 'cuda-ver': '10.1'}
ANDROID
: Compilation for the Android OS supports the following compiler options:
ANDROID_PLATFORM
: Specifies the Android API levels. Available levels range from 21 to 29. For example, {'ANDROID_PLATFORM': 28}
.
mattr
: Add {'mattr': ['+neon']}
to compiler options if compiling for ARM 32-bit platform with NEON support.
INFERENTIA
: Compilation for target ml_inf1 uses compiler options passed in as a JSON string. For example, \"CompilerOptions\": \"\\\"--verbose 1 --num-neuroncores 2 -O2\\\"\"
.
For information about supported compiler options, see Neuron Compiler CLI.
CoreML
: Compilation for the CoreML OutputConfig$TargetDevice supports the following compiler options:
class_labels
: Specifies the classification labels file name inside input tar.gz file. For example, {\"class_labels\": \"imagenet_labels_1000.txt\"}
. Labels inside the txt file should be separated by newlines.
A custom image.
", + "refs": { + "CustomImages$member": null + } + }, + "CustomImages": { + "base": null, + "refs": { + "KernelGatewayAppSettings$CustomImages": "A list of custom images that are configured to run as a KernelGateway app.
" + } + }, "DataCaptureConfig": { "base": "", "refs": { @@ -1509,11 +1609,28 @@ "TrainingJob$DebugRuleEvaluationStatuses": "Information about the evaluation status of the rules for the training job.
" } }, + "DefaultGid": { + "base": null, + "refs": { + "FileSystemConfig$DefaultGid": "The default POSIX group ID. If not specified, defaults to 100
.
The default POSIX user ID. If not specified, defaults to 1000
.
The name of the endpoint. The name must be unique within an AWS Region in your AWS account.
", + "CreateEndpointInput$EndpointName": "The name of the endpoint.The name must be unique within an AWS Region in your AWS account. The name is case-insensitive in CreateEndpoint
, but the case is preserved and must be matched in .
The name of the endpoint that you want to delete.
", "DescribeEndpointInput$EndpointName": "The name of the endpoint.
", "DescribeEndpointOutput$EndpointName": "Name of the endpoint.
", @@ -2236,7 +2403,6 @@ "CreateModelPackageInput$ModelPackageName": "The name of the model package. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).
", "DeleteAlgorithmInput$AlgorithmName": "The name of the algorithm to delete.
", "DeleteCodeRepositoryInput$CodeRepositoryName": "The name of the Git repository to delete.
", - "DeleteModelPackageInput$ModelPackageName": "The name of the model package. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).
", "DescribeAlgorithmOutput$AlgorithmName": "The name of the algorithm being described.
", "DescribeCodeRepositoryInput$CodeRepositoryName": "The name of the Git repository to describe.
", "DescribeCodeRepositoryOutput$CodeRepositoryName": "The name of the Git repository.
", @@ -2418,6 +2584,8 @@ "DescribeEndpointOutput$FailureReason": "If the status of the endpoint is Failed
, the reason why it failed.
The reason your flow definition failed.
", "DescribeHyperParameterTuningJobResponse$FailureReason": "If the tuning job failed, the reason it failed.
", + "DescribeImageResponse$FailureReason": "When a create, update, or delete operation fails, the reason for the failure.
", + "DescribeImageVersionResponse$FailureReason": "When a create or delete operation fails, the reason for the failure.
", "DescribeLabelingJobResponse$FailureReason": "If the job failed, the reason that it failed.
", "DescribeMonitoringScheduleResponse$FailureReason": "A string, up to one KB in size, that contains the reason a monitoring job failed, if it failed.
", "DescribeNotebookInstanceOutput$FailureReason": "If status is Failed
, the reason it failed.
The failure reason.
", "FlowDefinitionSummary$FailureReason": "The reason why the flow definition creation failed. A failure reason is returned only when the flow definition status is Failed
.
The reason that the training job failed.
", + "Image$FailureReason": "When a create, update, or delete operation fails, the reason for the failure.
", + "ImageVersion$FailureReason": "When a create or delete operation fails, the reason for the failure.
", "LabelingJobSummary$FailureReason": "If the LabelingJobStatus
field is Failed
, this field contains a description of the error.
Contains the reason a monitoring job failed, if it failed.
", "ProcessingJob$FailureReason": "A string, up to one KB in size, that contains the reason a processing job failed, if it failed.
", @@ -2445,6 +2615,12 @@ "FileSystemDataSource$FileSystemAccessMode": "The access mode of the mount of the directory associated with the channel. A directory can be mounted either in ro
(read-only) or rw
(read-write) mode.
The Amazon Elastic File System (EFS) storage configuration for an image.
", + "refs": { + "KernelGatewayImageConfig$FileSystemConfig": "The file system configuration.
" + } + }, "FileSystemDataSource": { "base": "Specifies a file system data source for a channel.
", "refs": { @@ -2739,6 +2915,12 @@ "HyperParameterTrainingJobDefinition$AlgorithmSpecification": "The HyperParameterAlgorithmSpecification object that specifies the resource algorithm to use for the training jobs that the tuning job launches.
" } }, + "HyperParameterKey": { + "base": null, + "refs": { + "HyperParameters$key": null + } + }, "HyperParameterScalingType": { "base": null, "refs": { @@ -2814,7 +2996,7 @@ "HyperParameterTuningJobName": { "base": null, "refs": { - "CreateHyperParameterTuningJobRequest$HyperParameterTuningJobName": "The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same AWS account and AWS Region. The name must have { } to { } characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.
", + "CreateHyperParameterTuningJobRequest$HyperParameterTuningJobName": "The name of the tuning job. This name is the prefix for the names of all training jobs that this tuning job launches. The name must be unique within the same AWS account and AWS Region. The name must have 1 to 32 characters. Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name is not case sensitive.
", "DescribeHyperParameterTuningJobRequest$HyperParameterTuningJobName": "The name of the tuning job.
", "DescribeHyperParameterTuningJobResponse$HyperParameterTuningJobName": "The name of the tuning job.
", "HyperParameterTrainingJobSummary$TuningJobName": "The HyperParameter tuning job that launched the training job.
", @@ -2891,6 +3073,13 @@ "HyperParameterTuningJobWarmStartConfig$WarmStartType": "Specifies one of the following:
The new hyperparameter tuning job uses the same input data and training image as the parent tuning jobs. You can change the hyperparameter ranges to search and the maximum number of training jobs that the hyperparameter tuning job launches. You cannot use a new version of the training algorithm, unless the changes in the new version do not affect the algorithm itself. For example, changes that improve logging or adding support for a different data format are allowed. You can also change hyperparameters from tunable to static, and from static to tunable, but the total number of static plus tunable hyperparameters must remain the same as it is in all parent jobs. The objective metric for the new tuning job must be the same as for all parent jobs.
The new hyperparameter tuning job can include input data, hyperparameter ranges, maximum number of concurrent training jobs, and maximum number of training jobs that are different than those of its parent hyperparameter tuning jobs. The training image can also be a different version from the version used in the parent hyperparameter tuning job. You can also change hyperparameters from tunable to static, and from static to tunable, but the total number of static plus tunable hyperparameters must remain the same as it is in all parent jobs. The objective metric for the new tuning job must be the same as for all parent jobs.
The default value for this hyperparameter. If a default value is specified, a hyperparameter cannot be required.
", + "HyperParameters$value": null + } + }, "HyperParameters": { "base": null, "refs": { @@ -2902,10 +3091,29 @@ "TrainingJobDefinition$HyperParameters": "The hyperparameters used for the training job.
" } }, + "Image": { + "base": "A SageMaker image. A SageMaker image represents a set of container images that are derived from a common base container image. Each of these container images is represented by a SageMaker ImageVersion
.
The Amazon Resource Name (ARN) of the SageMaker image created on the instance.
" + "CreateImageResponse$ImageArn": "The Amazon Resource Name (ARN) of the image.
", + "DescribeImageResponse$ImageArn": "The Amazon Resource Name (ARN) of the image.
", + "DescribeImageVersionResponse$ImageArn": "The Amazon Resource Name (ARN) of the image the version is based on.
", + "Image$ImageArn": "The Amazon Resource Name (ARN) of the image.
", + "ImageVersion$ImageArn": "The Amazon Resource Name (ARN) of the image the version is based on.
", + "ResourceSpec$SageMakerImageArn": "The ARN of the SageMaker image that the image version belongs to.
", + "UpdateImageResponse$ImageArn": "The Amazon Resource Name (ARN) of the image.
" + } + }, + "ImageBaseImage": { + "base": null, + "refs": { + "CreateImageVersionRequest$BaseImage": "The registry path of the container image to use as the starting point for this version. The path is an Amazon Container Registry (ECR) URI in the following format:
<acct-id>.dkr.ecr.<region>.amazonaws.com/<repo-name[:tag] or [@digest]>
The registry path of the container image on which this image version is based.
" } }, "ImageConfig": { @@ -2914,6 +3122,33 @@ "ContainerDefinition$ImageConfig": "Specifies whether the model container is in Amazon ECR or a private Docker registry accessible from your Amazon Virtual Private Cloud (VPC). For information about storing containers in a private Docker registry, see Use a Private Docker Registry for Real-Time Inference Containers
" } }, + "ImageContainerImage": { + "base": null, + "refs": { + "DescribeImageVersionResponse$ContainerImage": "The registry path of the container image that contains this image version.
" + } + }, + "ImageDeleteProperty": { + "base": null, + "refs": { + "ImageDeletePropertyList$member": null + } + }, + "ImageDeletePropertyList": { + "base": null, + "refs": { + "UpdateImageRequest$DeleteProperties": "A list of properties to delete. Only the Description
and DisplayName
properties can be deleted.
The description of the image.
", + "DescribeImageResponse$Description": "The description of the image.
", + "Image$Description": "The description of the image.
", + "UpdateImageRequest$Description": "The new description for the image.
" + } + }, "ImageDigest": { "base": null, "refs": { @@ -2921,6 +3156,56 @@ "TrainingSpecification$TrainingImageDigest": "An MD5 hash of the training algorithm that identifies the Docker image used for training.
" } }, + "ImageDisplayName": { + "base": null, + "refs": { + "CreateImageRequest$DisplayName": "The display name of the image. When the image is added to a domain, DisplayName
must be unique to the domain.
The name of the image as displayed.
", + "Image$DisplayName": "The name of the image as displayed.
", + "UpdateImageRequest$DisplayName": "The new display name for the image.
" + } + }, + "ImageName": { + "base": null, + "refs": { + "CreateImageRequest$ImageName": "The name of the image. Must be unique to your account.
", + "CreateImageVersionRequest$ImageName": "The ImageName
of the Image
to create a version of.
The name of the CustomImage. Must be unique to your account.
", + "DeleteImageRequest$ImageName": "The name of the image to delete.
", + "DeleteImageVersionRequest$ImageName": "The name of the image.
", + "DescribeImageRequest$ImageName": "The name of the image to describe.
", + "DescribeImageResponse$ImageName": "The name of the image.
", + "DescribeImageVersionRequest$ImageName": "The name of the image.
", + "Image$ImageName": "The name of the image.
", + "ListImageVersionsRequest$ImageName": "The name of the image to list the versions of.
", + "UpdateImageRequest$ImageName": "The name of the image to update.
" + } + }, + "ImageNameContains": { + "base": null, + "refs": { + "ListImagesRequest$NameContains": "A filter that returns only images whose name contains the specified string.
" + } + }, + "ImageSortBy": { + "base": null, + "refs": { + "ListImagesRequest$SortBy": "The property used to sort results. The default value is CREATION_TIME
.
The sort order. The default value is DESCENDING
.
The status of the image.
", + "Image$ImageStatus": "The status of the image.
" + } + }, "ImageUri": { "base": null, "refs": { @@ -2928,6 +3213,62 @@ "MonitoringAppSpecification$ImageUri": "The container image to be run by the monitoring job.
" } }, + "ImageVersion": { + "base": "A version of a SageMaker Image
. A version represents an existing container image.
The Amazon Resource Name (ARN) of the image version.
", + "DescribeImageVersionResponse$ImageVersionArn": "The ARN of the version.
", + "ImageVersion$ImageVersionArn": "The ARN of the version.
", + "ResourceSpec$SageMakerImageVersionArn": "The ARN of the image version created on the instance.
" + } + }, + "ImageVersionNumber": { + "base": null, + "refs": { + "CustomImage$ImageVersionNumber": "The version number of the CustomImage.
", + "DeleteImageVersionRequest$Version": "The version to delete.
", + "DescribeImageVersionRequest$Version": "The version of the image. If not specified, the latest version is described.
", + "DescribeImageVersionResponse$Version": "The version number.
", + "ImageVersion$Version": "The version number.
" + } + }, + "ImageVersionSortBy": { + "base": null, + "refs": { + "ListImageVersionsRequest$SortBy": "The property used to sort results. The default value is CREATION_TIME
.
The sort order. The default value is DESCENDING
.
The status of the version.
", + "ImageVersion$ImageVersionStatus": "The status of the version.
" + } + }, + "ImageVersions": { + "base": null, + "refs": { + "ListImageVersionsResponse$ImageVersions": "A list of versions and their properties.
" + } + }, + "Images": { + "base": null, + "refs": { + "ListImagesResponse$Images": "A list of images and their properties.
" + } + }, "InferenceSpecification": { "base": "Defines how to perform inference generation after a training job is run.
", "refs": { @@ -3043,12 +3384,45 @@ "UserSettings$JupyterServerAppSettings": "The Jupyter server's app settings.
" } }, + "KernelDisplayName": { + "base": null, + "refs": { + "KernelSpec$DisplayName": "The display name of the kernel.
" + } + }, "KernelGatewayAppSettings": { - "base": "The kernel gateway app settings.
", + "base": "The KernelGateway app settings.
", "refs": { "UserSettings$KernelGatewayAppSettings": "The kernel gateway app settings.
" } }, + "KernelGatewayImageConfig": { + "base": "The configuration for an Amazon SageMaker KernelGateway app.
", + "refs": { + "AppImageConfigDetails$KernelGatewayImageConfig": "The KernelGateway app.
", + "CreateAppImageConfigRequest$KernelGatewayImageConfig": "The KernelGatewayImageConfig.
", + "DescribeAppImageConfigResponse$KernelGatewayImageConfig": "The KernelGateway app.
", + "UpdateAppImageConfigRequest$KernelGatewayImageConfig": "The new KernelGateway app to run on the image.
" + } + }, + "KernelName": { + "base": null, + "refs": { + "KernelSpec$Name": "The name of the kernel. Must be unique to your account.
" + } + }, + "KernelSpec": { + "base": "Defines how a kernel is started and the arguments, environment variables, and metadata that are available to the kernel.
", + "refs": { + "KernelSpecs$member": null + } + }, + "KernelSpecs": { + "base": null, + "refs": { + "KernelGatewayImageConfig$KernelSpecs": "Defines how a kernel is started and the arguments, environment variables, and metadata that are available to the kernel.
" + } + }, "KmsKeyId": { "base": null, "refs": { @@ -3278,6 +3652,16 @@ "refs": { } }, + "ListAppImageConfigsRequest": { + "base": null, + "refs": { + } + }, + "ListAppImageConfigsResponse": { + "base": null, + "refs": { + } + }, "ListAppsRequest": { "base": null, "refs": { @@ -3404,6 +3788,26 @@ "refs": { } }, + "ListImageVersionsRequest": { + "base": null, + "refs": { + } + }, + "ListImageVersionsResponse": { + "base": null, + "refs": { + } + }, + "ListImagesRequest": { + "base": null, + "refs": { + } + }, + "ListImagesResponse": { + "base": null, + "refs": { + } + }, "ListLabelingJobsForWorkteamRequest": { "base": null, "refs": { @@ -3690,6 +4094,7 @@ "base": null, "refs": { "ListAlgorithmsInput$MaxResults": "The maximum number of algorithms to return in the response.
", + "ListAppImageConfigsRequest$MaxResults": "The maximum number of AppImageConfigs to return in the response. The default value is 10.
", "ListAppsRequest$MaxResults": "Returns a list up to a specified limit.
", "ListCodeRepositoriesInput$MaxResults": "The maximum number of Git repositories to return in the response.
", "ListCompilationJobsRequest$MaxResults": "The maximum number of model compilation jobs to return in the response.
", @@ -3700,6 +4105,8 @@ "ListFlowDefinitionsRequest$MaxResults": "The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
will be provided in the output that you can use to resume pagination.
The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
will be provided in the output that you can use to resume pagination.
The maximum number of tuning jobs to return. The default value is 10.
", + "ListImageVersionsRequest$MaxResults": "The maximum number of versions to return in the response. The default value is 10.
", + "ListImagesRequest$MaxResults": "The maximum number of images to return in the response. The default value is 10.
", "ListLabelingJobsForWorkteamRequest$MaxResults": "The maximum number of labeling jobs to return in each page of the response.
", "ListLabelingJobsRequest$MaxResults": "The maximum number of labeling jobs to return in each page of the response.
", "ListModelPackagesInput$MaxResults": "The maximum number of model packages to return in the response.
", @@ -4128,6 +4535,12 @@ "MonitoringJobDefinition$StoppingCondition": "Specifies a time limit for how long the monitoring job is allowed to run.
" } }, + "MountPath": { + "base": null, + "refs": { + "FileSystemConfig$MountPath": "The path within the image to mount the user's EFS home directory. The directory should be empty. If not specified, defaults to /home/sagemaker-user.
" + } + }, "NameContains": { "base": null, "refs": { @@ -4173,6 +4586,8 @@ "refs": { "ListAlgorithmsInput$NextToken": "If the response to a previous ListAlgorithms
request was truncated, the response includes a NextToken
. To retrieve the next set of algorithms, use the token in the next request.
If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of algorithms, use it in the subsequent request.
", + "ListAppImageConfigsRequest$NextToken": "If the previous call to ListImages
didn't return the full set of AppImageConfigs, the call returns a token for getting the next set of AppImageConfigs.
A token for getting the next set of AppImageConfigs, if there are any.
", "ListAppsRequest$NextToken": "If the previous response was truncated, you will receive this token. Use it in your next request to receive the next set of results.
", "ListAppsResponse$NextToken": "If the previous response was truncated, you will receive this token. Use it in your next request to receive the next set of results.
", "ListAutoMLJobsRequest$NextToken": "If the previous response was truncated, you receive this token. Use it in your next request to receive the next set of results.
", @@ -4193,6 +4608,10 @@ "ListHumanTaskUisResponse$NextToken": "A token to resume pagination.
", "ListHyperParameterTuningJobsRequest$NextToken": "If the result of the previous ListHyperParameterTuningJobs
request was truncated, the response includes a NextToken
. To retrieve the next set of tuning jobs, use the token in the next request.
If the result of this ListHyperParameterTuningJobs
request was truncated, the response includes a NextToken
. To retrieve the next set of tuning jobs, use the token in the next request.
If the previous call to ListImageVersions
didn't return the full set of versions, the call returns a token for getting the next set of versions.
A token for getting the next set of versions, if there are any.
", + "ListImagesRequest$NextToken": "If the previous call to ListImages
didn't return the full set of images, the call returns a token for getting the next set of images.
A token for getting the next set of images, if there are any.
", "ListLabelingJobsForWorkteamRequest$NextToken": "If the result of the previous ListLabelingJobsForWorkteam
request was truncated, the response includes a NextToken
. To retrieve the next set of labeling jobs, use the token in the next request.
If the response is truncated, Amazon SageMaker returns this token. To retrieve the next set of labeling jobs, use it in the subsequent request.
", "ListLabelingJobsRequest$NextToken": "If the result of the previous ListLabelingJobs
request was truncated, the response includes a NextToken
. To retrieve the next set of labeling jobs, use the token in the next request.
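The NextToken contract repeated in the entries above — pass the returned token back until the service stops returning one — can be sketched as a loop. The stub below stands in for a real List* call; its name and data are illustrative only, not part of the SDK:

```go
package main

import "fmt"

// page is a stand-in for a List* response: a batch of names plus the
// NextToken the service would return (empty when there are no more results).
type page struct {
	items     []string
	nextToken string
}

// listImagesStub simulates the documented contract: an empty token asks for
// the first page; a non-empty returned token must be echoed back to fetch
// the next page. (Stub data only -- the real call is SageMaker's ListImages.)
func listImagesStub(token string) page {
	switch token {
	case "":
		return page{items: []string{"image-a", "image-b"}, nextToken: "t1"}
	case "t1":
		return page{items: []string{"image-c"}, nextToken: ""}
	default:
		return page{}
	}
}

// collectAll drains every page by threading the token through.
func collectAll() []string {
	var all []string
	token := ""
	for {
		p := listImagesStub(token)
		all = append(all, p.items...)
		if p.nextToken == "" {
			return all
		}
		token = p.nextToken
	}
}

func main() {
	fmt.Println(collectAll()) // [image-a image-b image-c]
}
```

The generated paginators (for example, the ListImages entry added to paginators-1.json in this change) wrap exactly this token-threading loop.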
The name of the categorical hyperparameter to tune.
", "ContinuousParameterRange$Name": "The name of the continuous hyperparameter to tune.
", - "HyperParameters$key": null, "IntegerParameterRange$Name": "The name of the hyperparameter to search.
" } }, @@ -4590,8 +5008,6 @@ "ContinuousParameterRange$MaxValue": "The maximum value for the hyperparameter. The tuning job uses floating-point values between MinValue
value and this value for tuning.
The minimum floating-point value allowed.
", "ContinuousParameterRangeSpecification$MaxValue": "The maximum floating-point value allowed.
", - "HyperParameterSpecification$DefaultValue": "The default value for this hyperparameter. If a default value is specified, a hyperparameter cannot be required.
", - "HyperParameters$value": null, "IntegerParameterRange$MinValue": "The minimum value of the hyperparameter to search.
", "IntegerParameterRange$MaxValue": "The maximum value of the hyperparameter to search.
", "IntegerParameterRangeSpecification$MinValue": "The minimum integer value allowed.
", @@ -5043,12 +5459,12 @@ } }, "ResourceSpec": { - "base": "The instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance. The ARN is stored as metadata in SageMaker Studio notebooks.
", + "base": "Specifies the ARN's of a SageMaker image and SageMaker image version, and the instance type that the version runs on.
", "refs": { "CreateAppRequest$ResourceSpec": "The instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.
", "DescribeAppResponse$ResourceSpec": "The instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.
", "JupyterServerAppSettings$DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.
", - "KernelGatewayAppSettings$DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.
", + "KernelGatewayAppSettings$DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the default SageMaker image used by the KernelGateway app.
", "TensorBoardAppSettings$DefaultResourceSpec": "The default instance type and the Amazon Resource Name (ARN) of the SageMaker image created on the instance.
" } }, @@ -5090,6 +5506,7 @@ "CreateAutoMLJobRequest$RoleArn": "The ARN of the role that is used to access the data.
", "CreateCompilationJobRequest$RoleArn": "The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.
During model compilation, Amazon SageMaker needs your permission to:
Read input data from an S3 bucket
Write model artifacts to an S3 bucket
Write logs to Amazon CloudWatch Logs
Publish metrics to Amazon CloudWatch
You grant permissions for all of these tasks to an IAM role. To pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole
permission. For more information, see Amazon SageMaker Roles.
The Amazon Resource Name (ARN) of the role needed to call other services on your behalf. For example, arn:aws:iam::1234567890:role/service-role/AmazonSageMaker-ExecutionRole-20180111T151298
.
The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker to perform tasks on your behalf.
", "CreateLabelingJobRequest$RoleArn": "The Amazon Resource Number (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during data labeling. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete data labeling.
", "CreateModelInput$ExecutionRoleArn": "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access model artifacts and docker image for deployment on ML compute instances or for batch transform jobs. Deploying on ML compute instances is part of model hosting. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole
permission.
When you send any requests to AWS resources from the notebook instance, Amazon SageMaker assumes this role to perform tasks on your behalf. You must grant this role necessary permissions so Amazon SageMaker can perform these tasks. The policy must allow the Amazon SageMaker service principal (sagemaker.amazonaws.com) permissions to assume this role. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole
permission.
The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that has read permission to the input data location and write permission to the output data location in Amazon S3.
", "DescribeCompilationJobResponse$RoleArn": "The Amazon Resource Name (ARN) of the model compilation job.
", "DescribeFlowDefinitionResponse$RoleArn": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) execution role for the flow definition.
", + "DescribeImageResponse$RoleArn": "The Amazon Resource Name (ARN) of the IAM role that enables Amazon SageMaker to perform tasks on your behalf.
", "DescribeLabelingJobResponse$RoleArn": "The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during data labeling.
", "DescribeModelOutput$ExecutionRoleArn": "The Amazon Resource Name (ARN) of the IAM role that you specified for the model.
", "DescribeNotebookInstanceOutput$RoleArn": "The Amazon Resource Name (ARN) of the IAM role associated with the instance.
", @@ -5109,6 +5527,7 @@ "ProcessingJob$RoleArn": "The ARN of the role used to create the processing job.
", "RenderUiTemplateRequest$RoleArn": "The Amazon Resource Name (ARN) that has access to the S3 objects that are used by the template.
", "TrainingJob$RoleArn": "The AWS Identity and Access Management (IAM) role configured for the training job.
", + "UpdateImageRequest$RoleArn": "The new Amazon Resource Name (ARN) for the IAM role that enables Amazon SageMaker to perform tasks on your behalf.
", "UpdateNotebookInstanceInput$RoleArn": "The Amazon Resource Name (ARN) of the IAM role that Amazon SageMaker can assume to access the notebook instance. For more information, see Amazon SageMaker Roles.
To be able to pass this role to Amazon SageMaker, the caller of this API must have the iam:PassRole
permission.
The execution role for the user.
" } @@ -5165,7 +5584,7 @@ "AutoMLOutputDataConfig$S3OutputPath": "The Amazon S3 output path. Must be 128 characters or less.
", "AutoMLS3DataSource$S3Uri": "The URL to the Amazon S3 data source.
", "CheckpointConfig$S3Uri": "Identifies the S3 path where you want Amazon SageMaker to store checkpoints. For example, s3://bucket-name/key-name-prefix
.
The S3 URL of the file that defines the categories used to label the data objects.
For 3D point cloud task types, see Create a Labeling Category Configuration File for 3D Point Cloud Labeling Jobs.
For all other built-in task types and custom tasks, your label category configuration file must be a JSON file in the following format. Identify the labels you want to use by replacing label_1
, label_2
,...
,label_n
with your label categories.
{
\"document-version\": \"2018-11-28\"
\"labels\": [
{
\"label\": \"label_1\"
},
{
\"label\": \"label_2\"
},
...
{
\"label\": \"label_n\"
}
]
}
The S3 URI of the file that defines the categories used to label the data objects.
For 3D point cloud task types, see Create a Labeling Category Configuration File for 3D Point Cloud Labeling Jobs.
For all other built-in task types and custom tasks, your label category configuration file must be a JSON file in the following format. Identify the labels you want to use by replacing label_1
, label_2
,...
,label_n
with your label categories.
{
\"document-version\": \"2018-11-28\"
\"labels\": [
{
\"label\": \"label_1\"
},
{
\"label\": \"label_2\"
},
...
{
\"label\": \"label_n\"
}
]
}
Path to Amazon S3 storage location for tensors.
", "DebugRuleConfiguration$S3OutputPath": "Path to Amazon S3 storage location for rules.
", "DescribeLabelingJobResponse$LabelCategoryConfigS3Uri": "The S3 location of the JSON file that defines the categories used to label data objects. Please note the following label-category limits:
Semantic segmentation labeling jobs using automated labeling: 20 labels
Box bounding labeling jobs (all): 10 labels
The file is a JSON structure in the following format:
{
\"document-version\": \"2018-11-28\"
\"labels\": [
{
\"label\": \"label 1\"
},
{
\"label\": \"label 2\"
},
...
{
\"label\": \"label n\"
}
]
}
The VPC security group IDs, in the form sg-xxxxxxxx. The security groups must be for the same VPC as specified in the subnet.
", "DescribeNotebookInstanceOutput$SecurityGroups": "The IDs of the VPC security groups.
", - "UserSettings$SecurityGroups": "The security groups.
" + "UserSettings$SecurityGroups": "The security groups for the Amazon Virtual Private Cloud (VPC) that Studio uses for communication.
Optional when the CreateDomain.AppNetworkAccessType
parameter is set to PublicInternetOnly
.
Required when the CreateDomain.AppNetworkAccessType
parameter is set to VpcOnly
.
The sort order for the results. The default is Ascending
.
The sort order. The default value is Descending
.
The sort order for the results. The default is Ascending.
", "ListCompilationJobsRequest$SortOrder": "The sort order for results. The default is Ascending
.
The sort order. The default value is Descending
.
An array of Tag
objects. Each tag is a key-value pair. Only the key
parameter is required. If you don't specify a value, Amazon SageMaker sets the value to an empty string.
A list of tags associated with the Amazon SageMaker resource.
", + "CreateAppImageConfigRequest$Tags": "A list of tags to apply to the AppImageConfig.
", "CreateAppRequest$Tags": "Each tag consists of a key and an optional value. Tag keys must be unique per resource.
", "CreateAutoMLJobRequest$Tags": "Each tag consists of a key and an optional value. Tag keys must be unique per resource.
", + "CreateCompilationJobRequest$Tags": "An array of key-value pairs that you want to use to organize and track your AWS resource costs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateDomainRequest$Tags": "Tags to associated with the Domain. Each tag consists of a key and an optional value. Tag keys must be unique per resource. Tags are searchable using the Search API.
", "CreateEndpointConfigInput$Tags": "A list of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateEndpointInput$Tags": "An array of key-value pairs. For more information, see Using Cost Allocation Tagsin the AWS Billing and Cost Management User Guide.
", @@ -5642,6 +6064,7 @@ "CreateFlowDefinitionRequest$Tags": "An array of key-value pairs that contain metadata to help you categorize and organize a flow definition. Each tag consists of a key and a value, both of which you define.
", "CreateHumanTaskUiRequest$Tags": "An array of key-value pairs that contain metadata to help you categorize and organize a human review workflow user interface. Each tag consists of a key and a value, both of which you define.
", "CreateHyperParameterTuningJobRequest$Tags": "An array of key-value pairs. You can use tags to categorize your AWS resources in different ways, for example, by purpose, owner, or environment. For more information, see AWS Tagging Strategies.
Tags that you specify for the tuning job are also added to all training jobs that the tuning job launches.
", + "CreateImageRequest$Tags": "A list of tags to apply to the image.
", "CreateLabelingJobRequest$Tags": "An array of key/value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateModelInput$Tags": "An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", "CreateMonitoringScheduleRequest$Tags": "(Optional) An array of key-value pairs. For more information, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
", @@ -5808,6 +6231,8 @@ "Timestamp": { "base": null, "refs": { + "AppImageConfigDetails$CreationTime": "When the AppImageConfig was created.
", + "AppImageConfigDetails$LastModifiedTime": "When the AppImageConfig was last modified.
", "AutoMLCandidate$CreationTime": "The creation time.
", "AutoMLCandidate$EndTime": "The end time.
", "AutoMLCandidate$LastModifiedTime": "The last modified time.
", @@ -5820,6 +6245,8 @@ "CreateTrialComponentRequest$EndTime": "When the component ended.
", "DebugRuleEvaluationStatus$LastModifiedTime": "Timestamp when the rule evaluation status was last modified.
", "DeployedImage$ResolutionTime": "The date and time when the image path for the model resolved to the ResolvedImage
When the AppImageConfig was created.
", + "DescribeAppImageConfigResponse$LastModifiedTime": "When the AppImageConfig was last modified.
", "DescribeAppResponse$LastHealthCheckTimestamp": "The timestamp of the last health check.
", "DescribeAppResponse$LastUserActivityTimestamp": "The timestamp of the last user's activity.
", "DescribeAutoMLJobResponse$CreationTime": "Returns the job's creation time.
", @@ -5837,6 +6264,10 @@ "DescribeHyperParameterTuningJobResponse$CreationTime": "The date and time that the tuning job started.
", "DescribeHyperParameterTuningJobResponse$HyperParameterTuningEndTime": "The date and time that the tuning job ended.
", "DescribeHyperParameterTuningJobResponse$LastModifiedTime": "The date and time that the status of the tuning job was modified.
", + "DescribeImageResponse$CreationTime": "When the image was created.
", + "DescribeImageResponse$LastModifiedTime": "When the image was last modified.
", + "DescribeImageVersionResponse$CreationTime": "When the version was created.
", + "DescribeImageVersionResponse$LastModifiedTime": "When the version was last modified.
", "DescribeLabelingJobResponse$CreationTime": "The date and time that the labeling job was created.
", "DescribeLabelingJobResponse$LastModifiedTime": "The date and time that the labeling job was last updated.
", "DescribeModelOutput$CreationTime": "A timestamp that shows when the model was created.
", @@ -5874,9 +6305,17 @@ "HyperParameterTuningJobSummary$CreationTime": "The date and time that the tuning job was created.
", "HyperParameterTuningJobSummary$HyperParameterTuningEndTime": "The date and time that the tuning job ended.
", "HyperParameterTuningJobSummary$LastModifiedTime": "The date and time that the tuning job was modified.
", + "Image$CreationTime": "When the image was created.
", + "Image$LastModifiedTime": "When the image was last modified.
", + "ImageVersion$CreationTime": "When the version was created.
", + "ImageVersion$LastModifiedTime": "When the version was last modified.
", "LabelingJobForWorkteamSummary$CreationTime": "The date and time that the labeling job was created.
", "LabelingJobSummary$CreationTime": "The date and time that the job was created (timestamp).
", "LabelingJobSummary$LastModifiedTime": "The date and time that the job was last modified (timestamp).
", + "ListAppImageConfigsRequest$CreationTimeBefore": "A filter that returns only AppImageConfigs created on or before the specified time.
", + "ListAppImageConfigsRequest$CreationTimeAfter": "A filter that returns only AppImageConfigs created on or after the specified time.
", + "ListAppImageConfigsRequest$ModifiedTimeBefore": "A filter that returns only AppImageConfigs modified on or before the specified time.
", + "ListAppImageConfigsRequest$ModifiedTimeAfter": "A filter that returns only AppImageConfigs modified on or after the specified time.
", "ListAutoMLJobsRequest$CreationTimeAfter": "Request a list of jobs, using a filter for time.
", "ListAutoMLJobsRequest$CreationTimeBefore": "Request a list of jobs, using a filter for time.
", "ListAutoMLJobsRequest$LastModifiedTimeAfter": "Request a list of jobs, using a filter for time.
", @@ -5899,6 +6338,14 @@ "ListHyperParameterTuningJobsRequest$CreationTimeBefore": "A filter that returns only tuning jobs that were created before the specified time.
", "ListHyperParameterTuningJobsRequest$LastModifiedTimeAfter": "A filter that returns only tuning jobs that were modified after the specified time.
", "ListHyperParameterTuningJobsRequest$LastModifiedTimeBefore": "A filter that returns only tuning jobs that were modified before the specified time.
", + "ListImageVersionsRequest$CreationTimeAfter": "A filter that returns only versions created on or after the specified time.
", + "ListImageVersionsRequest$CreationTimeBefore": "A filter that returns only versions created on or before the specified time.
", + "ListImageVersionsRequest$LastModifiedTimeAfter": "A filter that returns only versions modified on or after the specified time.
", + "ListImageVersionsRequest$LastModifiedTimeBefore": "A filter that returns only versions modified on or before the specified time.
", + "ListImagesRequest$CreationTimeAfter": "A filter that returns only images created on or after the specified time.
", + "ListImagesRequest$CreationTimeBefore": "A filter that returns only images created on or before the specified time.
", + "ListImagesRequest$LastModifiedTimeAfter": "A filter that returns only images modified on or after the specified time.
", + "ListImagesRequest$LastModifiedTimeBefore": "A filter that returns only images modified on or before the specified time.
", "ListLabelingJobsForWorkteamRequest$CreationTimeAfter": "A filter that returns only labeling jobs created after the specified time (timestamp).
", "ListLabelingJobsForWorkteamRequest$CreationTimeBefore": "A filter that returns only labeling jobs created before the specified time (timestamp).
", "ListLabelingJobsRequest$CreationTimeAfter": "A filter that returns only labeling jobs created after the specified time (timestamp).
", @@ -6480,6 +6927,16 @@ "DescribeHumanTaskUiResponse$UiTemplate": null } }, + "UpdateAppImageConfigRequest": { + "base": null, + "refs": { + } + }, + "UpdateAppImageConfigResponse": { + "base": null, + "refs": { + } + }, "UpdateCodeRepositoryInput": { "base": null, "refs": { @@ -6530,6 +6987,16 @@ "refs": { } }, + "UpdateImageRequest": { + "base": null, + "refs": { + } + }, + "UpdateImageResponse": { + "base": null, + "refs": { + } + }, "UpdateMonitoringScheduleRequest": { "base": null, "refs": { @@ -6737,6 +7204,14 @@ "ProductionVariantSummary$DesiredWeight": "The requested weight, as specified in the UpdateEndpointWeightsAndCapacities
request.
The name or Amazon Resource Name (ARN) of the model package to use to create the model.
", + "DeleteModelPackageInput$ModelPackageName": "The name of the model package. The name must have 1 to 63 characters. Valid characters are a-z, A-Z, 0-9, and - (hyphen).
", + "DescribeModelPackageInput$ModelPackageName": "The name of the model package to describe.
" + } + }, "VolumeSizeInGB": { "base": null, "refs": { diff --git a/models/apis/sagemaker/2017-07-24/paginators-1.json b/models/apis/sagemaker/2017-07-24/paginators-1.json index 7f87ca77493..e4cb62d8038 100644 --- a/models/apis/sagemaker/2017-07-24/paginators-1.json +++ b/models/apis/sagemaker/2017-07-24/paginators-1.json @@ -78,6 +78,18 @@ "limit_key": "MaxResults", "result_key": "HyperParameterTuningJobSummaries" }, + "ListImageVersions": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults", + "result_key": "ImageVersions" + }, + "ListImages": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults", + "result_key": "Images" + }, "ListLabelingJobs": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/endpoints/endpoints.json b/models/endpoints/endpoints.json index 728bd1fc212..8fcd3e500f1 100644 --- a/models/endpoints/endpoints.json +++ b/models/endpoints/endpoints.json @@ -4632,6 +4632,12 @@ "region" : "us-east-1" }, "hostname" : "route53.amazonaws.com" + }, + "fips-aws-global" : { + "credentialScope" : { + "region" : "us-east-1" + }, + "hostname" : "route53-fips.amazonaws.com" } }, "isRegionalized" : false, @@ -8263,6 +8269,12 @@ "region" : "us-gov-west-1" }, "hostname" : "route53.us-gov.amazonaws.com" + }, + "fips-aws-us-gov-global" : { + "credentialScope" : { + "region" : "us-gov-west-1" + }, + "hostname" : "route53.us-gov.amazonaws.com" } }, "isRegionalized" : false, diff --git a/service/kendra/api.go b/service/kendra/api.go index bdd1b16c181..0c9cb2f8a60 100644 --- a/service/kendra/api.go +++ b/service/kendra/api.go @@ -3337,6 +3337,721 @@ func (s *ConflictException) RequestID() string { return s.RespMetadata.RequestID } +// Specifies the attachment settings for the Confluence data source. Attachment +// settings are optional, if you don't specify settings attachments, Amazon +// Kendra won't index them. 
+type ConfluenceAttachmentConfiguration struct { + _ struct{} `type:"structure"` + + // Defines how attachment metadata fields should be mapped to index fields. + // Before you can map a field, you must first create an index field with a matching + // type using the console or the UpdateIndex operation. + // + // If you specify the AttachmentFieldMappings parameter, you must specify at + // least one field mapping. + AttachmentFieldMappings []*ConfluenceAttachmentToIndexFieldMapping `min:"1" type:"list"` + + // Indicates whether Amazon Kendra indexes attachments to the pages and blogs + // in the Confluence data source. + CrawlAttachments *bool `type:"boolean"` +} + +// String returns the string representation +func (s ConfluenceAttachmentConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceAttachmentConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluenceAttachmentConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceAttachmentConfiguration"} + if s.AttachmentFieldMappings != nil && len(s.AttachmentFieldMappings) < 1 { + invalidParams.Add(request.NewErrParamMinLen("AttachmentFieldMappings", 1)) + } + if s.AttachmentFieldMappings != nil { + for i, v := range s.AttachmentFieldMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "AttachmentFieldMappings", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttachmentFieldMappings sets the AttachmentFieldMappings field's value. 
+func (s *ConfluenceAttachmentConfiguration) SetAttachmentFieldMappings(v []*ConfluenceAttachmentToIndexFieldMapping) *ConfluenceAttachmentConfiguration { + s.AttachmentFieldMappings = v + return s +} + +// SetCrawlAttachments sets the CrawlAttachments field's value. +func (s *ConfluenceAttachmentConfiguration) SetCrawlAttachments(v bool) *ConfluenceAttachmentConfiguration { + s.CrawlAttachments = &v + return s +} + +// Defines the mapping between a field in the Confluence data source and an Amazon +// Kendra index field. +// +// You must first create the index field using the operation. +type ConfluenceAttachmentToIndexFieldMapping struct { + _ struct{} `type:"structure"` + + // The name of the field in the data source. + // + // You must first create the index field using the operation. + DataSourceFieldName *string `type:"string" enum:"ConfluenceAttachmentFieldName"` + + // The format for date fields in the data source. If the field specified in + // DataSourceFieldName is a date field you must specify the date format. If + // the field is not a date field, an exception is thrown. + DateFieldFormat *string `min:"4" type:"string"` + + // The name of the index field to map to the Confluence data source field. The + // index field type must match the Confluence field type. + IndexFieldName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfluenceAttachmentToIndexFieldMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceAttachmentToIndexFieldMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ConfluenceAttachmentToIndexFieldMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceAttachmentToIndexFieldMapping"} + if s.DateFieldFormat != nil && len(*s.DateFieldFormat) < 4 { + invalidParams.Add(request.NewErrParamMinLen("DateFieldFormat", 4)) + } + if s.IndexFieldName != nil && len(*s.IndexFieldName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexFieldName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDataSourceFieldName sets the DataSourceFieldName field's value. +func (s *ConfluenceAttachmentToIndexFieldMapping) SetDataSourceFieldName(v string) *ConfluenceAttachmentToIndexFieldMapping { + s.DataSourceFieldName = &v + return s +} + +// SetDateFieldFormat sets the DateFieldFormat field's value. +func (s *ConfluenceAttachmentToIndexFieldMapping) SetDateFieldFormat(v string) *ConfluenceAttachmentToIndexFieldMapping { + s.DateFieldFormat = &v + return s +} + +// SetIndexFieldName sets the IndexFieldName field's value. +func (s *ConfluenceAttachmentToIndexFieldMapping) SetIndexFieldName(v string) *ConfluenceAttachmentToIndexFieldMapping { + s.IndexFieldName = &v + return s +} + +// Specifies the blog settings for the Confluence data source. Blogs are always +// indexed unless filtered from the index by the ExclusionPatterns or InclusionPatterns +// fields in the data type. +type ConfluenceBlogConfiguration struct { + _ struct{} `type:"structure"` + + // Defines how blog metadata fields should be mapped to index fields. Before + // you can map a field, you must first create an index field with a matching + // type using the console or the UpdateIndex operation. + // + // If you specify the BlogFieldMappings parameter, you must specify at least + // one field mapping. 
+ BlogFieldMappings []*ConfluenceBlogToIndexFieldMapping `min:"1" type:"list"` +} + +// String returns the string representation +func (s ConfluenceBlogConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceBlogConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluenceBlogConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceBlogConfiguration"} + if s.BlogFieldMappings != nil && len(s.BlogFieldMappings) < 1 { + invalidParams.Add(request.NewErrParamMinLen("BlogFieldMappings", 1)) + } + if s.BlogFieldMappings != nil { + for i, v := range s.BlogFieldMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "BlogFieldMappings", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetBlogFieldMappings sets the BlogFieldMappings field's value. +func (s *ConfluenceBlogConfiguration) SetBlogFieldMappings(v []*ConfluenceBlogToIndexFieldMapping) *ConfluenceBlogConfiguration { + s.BlogFieldMappings = v + return s +} + +// Defines the mapping between a blog field in the Confluence data source and +// an Amazon Kendra index field. +// +// You must first create the index field using the operation. +type ConfluenceBlogToIndexFieldMapping struct { + _ struct{} `type:"structure"` + + // The name of the field in the data source. + DataSourceFieldName *string `type:"string" enum:"ConfluenceBlogFieldName"` + + // The format for date fields in the data source. If the field specified in + // DataSourceFieldName is a date field you must specify the date format. If + // the field is not a date field, an exception is thrown. 
+ DateFieldFormat *string `min:"4" type:"string"` + + // The name of the index field to map to the Confluence data source field. The + // index field type must match the Confluence field type. + IndexFieldName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfluenceBlogToIndexFieldMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceBlogToIndexFieldMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluenceBlogToIndexFieldMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceBlogToIndexFieldMapping"} + if s.DateFieldFormat != nil && len(*s.DateFieldFormat) < 4 { + invalidParams.Add(request.NewErrParamMinLen("DateFieldFormat", 4)) + } + if s.IndexFieldName != nil && len(*s.IndexFieldName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexFieldName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDataSourceFieldName sets the DataSourceFieldName field's value. +func (s *ConfluenceBlogToIndexFieldMapping) SetDataSourceFieldName(v string) *ConfluenceBlogToIndexFieldMapping { + s.DataSourceFieldName = &v + return s +} + +// SetDateFieldFormat sets the DateFieldFormat field's value. +func (s *ConfluenceBlogToIndexFieldMapping) SetDateFieldFormat(v string) *ConfluenceBlogToIndexFieldMapping { + s.DateFieldFormat = &v + return s +} + +// SetIndexFieldName sets the IndexFieldName field's value. +func (s *ConfluenceBlogToIndexFieldMapping) SetIndexFieldName(v string) *ConfluenceBlogToIndexFieldMapping { + s.IndexFieldName = &v + return s +} + +// Provides configuration information for data sources that connect to Confluence. 
+type ConfluenceConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies configuration information for indexing attachments to Confluence + // blogs and pages. + AttachmentConfiguration *ConfluenceAttachmentConfiguration `type:"structure"` + + // Specifies configuration information for indexing Confluence blogs. + BlogConfiguration *ConfluenceBlogConfiguration `type:"structure"` + + // A list of regular expression patterns that apply to a URL on the Confluence + // server. An exclusion pattern can apply to a blog post, a page, a space, or + // an attachment. Items that match the pattern are excluded from the index. + // Items that don't match the pattern are included in the index. If an item matches + // both an exclusion pattern and an inclusion pattern, the item isn't included + // in the index. + ExclusionPatterns []*string `type:"list"` + + // A list of regular expression patterns that apply to a URL on the Confluence + // server. An inclusion pattern can apply to a blog post, a page, a space, or + // an attachment. Items that match the patterns are included in the index. Items + // that don't match the pattern are excluded from the index. If an item matches + // both an inclusion pattern and an exclusion pattern, the item isn't included + // in the index. + InclusionPatterns []*string `type:"list"` + + // Specifies configuration information for indexing Confluence pages. + PageConfiguration *ConfluencePageConfiguration `type:"structure"` + + // The Amazon Resource Name (ARN) of an AWS Secrets Manager secret that contains + // the key/value pairs required to connect to your Confluence server. The secret + // must contain a JSON structure with the following keys: + // + // * username - The user name of a user with administrative privileges for + // the Confluence server. + // + // * password - The password associated with the user logging in to the Confluence + // server. 
+ // + // SecretArn is a required field + SecretArn *string `min:"1" type:"string" required:"true"` + + // The URL of your Confluence instance. Use the full URL of the server. For + // example, https://server.example.com:port/. You can also use an IP address, + // for example, https://192.168.1.113/. + // + // ServerUrl is a required field + ServerUrl *string `min:"1" type:"string" required:"true"` + + // Specifies configuration information for indexing Confluence spaces. + SpaceConfiguration *ConfluenceSpaceConfiguration `type:"structure"` + + // Specifies the version of the Confluence installation that you are connecting + // to. + // + // Version is a required field + Version *string `type:"string" required:"true" enum:"ConfluenceVersion"` + + // Specifies the information for connecting to an Amazon VPC. + VpcConfiguration *DataSourceVpcConfiguration `type:"structure"` +} + +// String returns the string representation +func (s ConfluenceConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ConfluenceConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceConfiguration"} + if s.SecretArn == nil { + invalidParams.Add(request.NewErrParamRequired("SecretArn")) + } + if s.SecretArn != nil && len(*s.SecretArn) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SecretArn", 1)) + } + if s.ServerUrl == nil { + invalidParams.Add(request.NewErrParamRequired("ServerUrl")) + } + if s.ServerUrl != nil && len(*s.ServerUrl) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ServerUrl", 1)) + } + if s.Version == nil { + invalidParams.Add(request.NewErrParamRequired("Version")) + } + if s.AttachmentConfiguration != nil { + if err := s.AttachmentConfiguration.Validate(); err != nil { + invalidParams.AddNested("AttachmentConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.BlogConfiguration != nil { + if err := s.BlogConfiguration.Validate(); err != nil { + invalidParams.AddNested("BlogConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.PageConfiguration != nil { + if err := s.PageConfiguration.Validate(); err != nil { + invalidParams.AddNested("PageConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.SpaceConfiguration != nil { + if err := s.SpaceConfiguration.Validate(); err != nil { + invalidParams.AddNested("SpaceConfiguration", err.(request.ErrInvalidParams)) + } + } + if s.VpcConfiguration != nil { + if err := s.VpcConfiguration.Validate(); err != nil { + invalidParams.AddNested("VpcConfiguration", err.(request.ErrInvalidParams)) + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAttachmentConfiguration sets the AttachmentConfiguration field's value. +func (s *ConfluenceConfiguration) SetAttachmentConfiguration(v *ConfluenceAttachmentConfiguration) *ConfluenceConfiguration { + s.AttachmentConfiguration = v + return s +} + +// SetBlogConfiguration sets the BlogConfiguration field's value. 
+func (s *ConfluenceConfiguration) SetBlogConfiguration(v *ConfluenceBlogConfiguration) *ConfluenceConfiguration { + s.BlogConfiguration = v + return s +} + +// SetExclusionPatterns sets the ExclusionPatterns field's value. +func (s *ConfluenceConfiguration) SetExclusionPatterns(v []*string) *ConfluenceConfiguration { + s.ExclusionPatterns = v + return s +} + +// SetInclusionPatterns sets the InclusionPatterns field's value. +func (s *ConfluenceConfiguration) SetInclusionPatterns(v []*string) *ConfluenceConfiguration { + s.InclusionPatterns = v + return s +} + +// SetPageConfiguration sets the PageConfiguration field's value. +func (s *ConfluenceConfiguration) SetPageConfiguration(v *ConfluencePageConfiguration) *ConfluenceConfiguration { + s.PageConfiguration = v + return s +} + +// SetSecretArn sets the SecretArn field's value. +func (s *ConfluenceConfiguration) SetSecretArn(v string) *ConfluenceConfiguration { + s.SecretArn = &v + return s +} + +// SetServerUrl sets the ServerUrl field's value. +func (s *ConfluenceConfiguration) SetServerUrl(v string) *ConfluenceConfiguration { + s.ServerUrl = &v + return s +} + +// SetSpaceConfiguration sets the SpaceConfiguration field's value. +func (s *ConfluenceConfiguration) SetSpaceConfiguration(v *ConfluenceSpaceConfiguration) *ConfluenceConfiguration { + s.SpaceConfiguration = v + return s +} + +// SetVersion sets the Version field's value. +func (s *ConfluenceConfiguration) SetVersion(v string) *ConfluenceConfiguration { + s.Version = &v + return s +} + +// SetVpcConfiguration sets the VpcConfiguration field's value. +func (s *ConfluenceConfiguration) SetVpcConfiguration(v *DataSourceVpcConfiguration) *ConfluenceConfiguration { + s.VpcConfiguration = v + return s +} + +// Specifies the page settings for the Confluence data source. +type ConfluencePageConfiguration struct { + _ struct{} `type:"structure"` + + // Defines how page metadata fields should be mapped to index fields. 
Before + // you can map a field, you must first create an index field with a matching + // type using the console or the UpdateIndex operation. + // + // If you specify the PageFieldMappings parameter, you must specify at least + // one field mapping. + PageFieldMappings []*ConfluencePageToIndexFieldMapping `min:"1" type:"list"` +} + +// String returns the string representation +func (s ConfluencePageConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluencePageConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluencePageConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluencePageConfiguration"} + if s.PageFieldMappings != nil && len(s.PageFieldMappings) < 1 { + invalidParams.Add(request.NewErrParamMinLen("PageFieldMappings", 1)) + } + if s.PageFieldMappings != nil { + for i, v := range s.PageFieldMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "PageFieldMappings", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetPageFieldMappings sets the PageFieldMappings field's value. +func (s *ConfluencePageConfiguration) SetPageFieldMappings(v []*ConfluencePageToIndexFieldMapping) *ConfluencePageConfiguration { + s.PageFieldMappings = v + return s +} + +// Defines the mapping between a field in the Confluence data source and an Amazon +// Kendra index field. +// +// You must first create the index field using the UpdateIndex operation. +type ConfluencePageToIndexFieldMapping struct { + _ struct{} `type:"structure"` + + // The name of the field in the data source. + DataSourceFieldName *string `type:"string" enum:"ConfluencePageFieldName"` + + // The format for date fields in the data source. 
If the field specified in + // DataSourceFieldName is a date field, you must specify the date format. If + // the field is not a date field, an exception is thrown. + DateFieldFormat *string `min:"4" type:"string"` + + // The name of the index field to map to the Confluence data source field. The + // index field type must match the Confluence field type. + IndexFieldName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfluencePageToIndexFieldMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluencePageToIndexFieldMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluencePageToIndexFieldMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluencePageToIndexFieldMapping"} + if s.DateFieldFormat != nil && len(*s.DateFieldFormat) < 4 { + invalidParams.Add(request.NewErrParamMinLen("DateFieldFormat", 4)) + } + if s.IndexFieldName != nil && len(*s.IndexFieldName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexFieldName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDataSourceFieldName sets the DataSourceFieldName field's value. +func (s *ConfluencePageToIndexFieldMapping) SetDataSourceFieldName(v string) *ConfluencePageToIndexFieldMapping { + s.DataSourceFieldName = &v + return s +} + +// SetDateFieldFormat sets the DateFieldFormat field's value. +func (s *ConfluencePageToIndexFieldMapping) SetDateFieldFormat(v string) *ConfluencePageToIndexFieldMapping { + s.DateFieldFormat = &v + return s +} + +// SetIndexFieldName sets the IndexFieldName field's value. +func (s *ConfluencePageToIndexFieldMapping) SetIndexFieldName(v string) *ConfluencePageToIndexFieldMapping { + s.IndexFieldName = &v + return s +} + +// Specifies the configuration for indexing Confluence spaces. 
+type ConfluenceSpaceConfiguration struct { + _ struct{} `type:"structure"` + + // Specifies whether Amazon Kendra should index archived spaces. + CrawlArchivedSpaces *bool `type:"boolean"` + + // Specifies whether Amazon Kendra should index personal spaces. Users can add + // restrictions to items in personal spaces. If personal spaces are indexed, + // queries without user context information may return restricted items from + // a personal space in their results. For more information, see Filtering on + // user context (https://docs.aws.amazon.com/kendra/latest/dg/user-context-filter.html). + CrawlPersonalSpaces *bool `type:"boolean"` + + // A list of space keys of Confluence spaces. If you include a key, the blogs, + // documents, and attachments in the space are not indexed. If a space is in + // both the ExcludeSpaces and the IncludeSpaces list, the space is excluded. + ExcludeSpaces []*string `min:"1" type:"list"` + + // A list of space keys for Confluence spaces. If you include a key, the blogs, + // documents, and attachments in the space are indexed. Spaces that aren't in + // the list aren't indexed. A space in the list must exist. Otherwise, Amazon + // Kendra logs an error when the data source is synchronized. If a space is + // in both the IncludeSpaces and the ExcludeSpaces list, the space is excluded. + IncludeSpaces []*string `min:"1" type:"list"` + + // Defines how space metadata fields should be mapped to index fields. Before + // you can map a field, you must first create an index field with a matching + // type using the console or the UpdateIndex operation. + // + // If you specify the SpaceFieldMappings parameter, you must specify at least + // one field mapping. 
+ SpaceFieldMappings []*ConfluenceSpaceToIndexFieldMapping `min:"1" type:"list"` +} + +// String returns the string representation +func (s ConfluenceSpaceConfiguration) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceSpaceConfiguration) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ConfluenceSpaceConfiguration) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceSpaceConfiguration"} + if s.ExcludeSpaces != nil && len(s.ExcludeSpaces) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ExcludeSpaces", 1)) + } + if s.IncludeSpaces != nil && len(s.IncludeSpaces) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IncludeSpaces", 1)) + } + if s.SpaceFieldMappings != nil && len(s.SpaceFieldMappings) < 1 { + invalidParams.Add(request.NewErrParamMinLen("SpaceFieldMappings", 1)) + } + if s.SpaceFieldMappings != nil { + for i, v := range s.SpaceFieldMappings { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "SpaceFieldMappings", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetCrawlArchivedSpaces sets the CrawlArchivedSpaces field's value. +func (s *ConfluenceSpaceConfiguration) SetCrawlArchivedSpaces(v bool) *ConfluenceSpaceConfiguration { + s.CrawlArchivedSpaces = &v + return s +} + +// SetCrawlPersonalSpaces sets the CrawlPersonalSpaces field's value. +func (s *ConfluenceSpaceConfiguration) SetCrawlPersonalSpaces(v bool) *ConfluenceSpaceConfiguration { + s.CrawlPersonalSpaces = &v + return s +} + +// SetExcludeSpaces sets the ExcludeSpaces field's value. 
+func (s *ConfluenceSpaceConfiguration) SetExcludeSpaces(v []*string) *ConfluenceSpaceConfiguration { + s.ExcludeSpaces = v + return s +} + +// SetIncludeSpaces sets the IncludeSpaces field's value. +func (s *ConfluenceSpaceConfiguration) SetIncludeSpaces(v []*string) *ConfluenceSpaceConfiguration { + s.IncludeSpaces = v + return s +} + +// SetSpaceFieldMappings sets the SpaceFieldMappings field's value. +func (s *ConfluenceSpaceConfiguration) SetSpaceFieldMappings(v []*ConfluenceSpaceToIndexFieldMapping) *ConfluenceSpaceConfiguration { + s.SpaceFieldMappings = v + return s +} + +// Defines the mapping between a field in the Confluence data source and an Amazon +// Kendra index field. +// +// You must first create the index field using the UpdateIndex operation. +type ConfluenceSpaceToIndexFieldMapping struct { + _ struct{} `type:"structure"` + + // The name of the field in the data source. + DataSourceFieldName *string `type:"string" enum:"ConfluenceSpaceFieldName"` + + // The format for date fields in the data source. If the field specified in + // DataSourceFieldName is a date field, you must specify the date format. If + // the field is not a date field, an exception is thrown. + DateFieldFormat *string `min:"4" type:"string"` + + // The name of the index field to map to the Confluence data source field. The + // index field type must match the Confluence field type. + IndexFieldName *string `min:"1" type:"string"` +} + +// String returns the string representation +func (s ConfluenceSpaceToIndexFieldMapping) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ConfluenceSpaceToIndexFieldMapping) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ConfluenceSpaceToIndexFieldMapping) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "ConfluenceSpaceToIndexFieldMapping"} + if s.DateFieldFormat != nil && len(*s.DateFieldFormat) < 4 { + invalidParams.Add(request.NewErrParamMinLen("DateFieldFormat", 4)) + } + if s.IndexFieldName != nil && len(*s.IndexFieldName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("IndexFieldName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDataSourceFieldName sets the DataSourceFieldName field's value. +func (s *ConfluenceSpaceToIndexFieldMapping) SetDataSourceFieldName(v string) *ConfluenceSpaceToIndexFieldMapping { + s.DataSourceFieldName = &v + return s +} + +// SetDateFieldFormat sets the DateFieldFormat field's value. +func (s *ConfluenceSpaceToIndexFieldMapping) SetDateFieldFormat(v string) *ConfluenceSpaceToIndexFieldMapping { + s.DateFieldFormat = &v + return s +} + +// SetIndexFieldName sets the IndexFieldName field's value. +func (s *ConfluenceSpaceToIndexFieldMapping) SetIndexFieldName(v string) *ConfluenceSpaceToIndexFieldMapping { + s.IndexFieldName = &v + return s +} + // Provides the information necessary to connect to a database. type ConnectionConfiguration struct { _ struct{} `type:"structure"` @@ -3986,6 +4701,9 @@ func (s *CreateIndexOutput) SetId(v string) *CreateIndexOutput { type DataSourceConfiguration struct { _ struct{} `type:"structure"` + // Provides configuration information for connecting to a Confluence data source. + ConfluenceConfiguration *ConfluenceConfiguration `type:"structure"` + // Provides information necessary to create a data source connector for a database. DatabaseConfiguration *DatabaseConfiguration `type:"structure"` @@ -4021,6 +4739,11 @@ func (s DataSourceConfiguration) GoString() string { // Validate inspects the fields of the type to determine if they are valid. 
func (s *DataSourceConfiguration) Validate() error { invalidParams := request.ErrInvalidParams{Context: "DataSourceConfiguration"} + if s.ConfluenceConfiguration != nil { + if err := s.ConfluenceConfiguration.Validate(); err != nil { + invalidParams.AddNested("ConfluenceConfiguration", err.(request.ErrInvalidParams)) + } + } if s.DatabaseConfiguration != nil { if err := s.DatabaseConfiguration.Validate(); err != nil { invalidParams.AddNested("DatabaseConfiguration", err.(request.ErrInvalidParams)) @@ -4058,6 +4781,12 @@ func (s *DataSourceConfiguration) Validate() error { return nil } +// SetConfluenceConfiguration sets the ConfluenceConfiguration field's value. +func (s *DataSourceConfiguration) SetConfluenceConfiguration(v *ConfluenceConfiguration) *DataSourceConfiguration { + s.ConfluenceConfiguration = v + return s +} + // SetDatabaseConfiguration sets the DatabaseConfiguration field's value. func (s *DataSourceConfiguration) SetDatabaseConfiguration(v *DatabaseConfiguration) *DataSourceConfiguration { s.DatabaseConfiguration = v @@ -10278,6 +11007,194 @@ func AdditionalResultAttributeValueType_Values() []string { } } +const ( + // ConfluenceAttachmentFieldNameAuthor is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameAuthor = "AUTHOR" + + // ConfluenceAttachmentFieldNameContentType is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameContentType = "CONTENT_TYPE" + + // ConfluenceAttachmentFieldNameCreatedDate is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameCreatedDate = "CREATED_DATE" + + // ConfluenceAttachmentFieldNameDisplayUrl is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameDisplayUrl = "DISPLAY_URL" + + // ConfluenceAttachmentFieldNameFileSize is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameFileSize = "FILE_SIZE" + + // ConfluenceAttachmentFieldNameItemType is a ConfluenceAttachmentFieldName enum value + 
ConfluenceAttachmentFieldNameItemType = "ITEM_TYPE" + + // ConfluenceAttachmentFieldNameParentId is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameParentId = "PARENT_ID" + + // ConfluenceAttachmentFieldNameSpaceKey is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameSpaceKey = "SPACE_KEY" + + // ConfluenceAttachmentFieldNameSpaceName is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameSpaceName = "SPACE_NAME" + + // ConfluenceAttachmentFieldNameUrl is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameUrl = "URL" + + // ConfluenceAttachmentFieldNameVersion is a ConfluenceAttachmentFieldName enum value + ConfluenceAttachmentFieldNameVersion = "VERSION" +) + +// ConfluenceAttachmentFieldName_Values returns all elements of the ConfluenceAttachmentFieldName enum +func ConfluenceAttachmentFieldName_Values() []string { + return []string{ + ConfluenceAttachmentFieldNameAuthor, + ConfluenceAttachmentFieldNameContentType, + ConfluenceAttachmentFieldNameCreatedDate, + ConfluenceAttachmentFieldNameDisplayUrl, + ConfluenceAttachmentFieldNameFileSize, + ConfluenceAttachmentFieldNameItemType, + ConfluenceAttachmentFieldNameParentId, + ConfluenceAttachmentFieldNameSpaceKey, + ConfluenceAttachmentFieldNameSpaceName, + ConfluenceAttachmentFieldNameUrl, + ConfluenceAttachmentFieldNameVersion, + } +} + +const ( + // ConfluenceBlogFieldNameAuthor is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameAuthor = "AUTHOR" + + // ConfluenceBlogFieldNameDisplayUrl is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameDisplayUrl = "DISPLAY_URL" + + // ConfluenceBlogFieldNameItemType is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameItemType = "ITEM_TYPE" + + // ConfluenceBlogFieldNameLabels is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameLabels = "LABELS" + + // ConfluenceBlogFieldNamePublishDate is a ConfluenceBlogFieldName enum value + 
ConfluenceBlogFieldNamePublishDate = "PUBLISH_DATE" + + // ConfluenceBlogFieldNameSpaceKey is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameSpaceKey = "SPACE_KEY" + + // ConfluenceBlogFieldNameSpaceName is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameSpaceName = "SPACE_NAME" + + // ConfluenceBlogFieldNameUrl is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameUrl = "URL" + + // ConfluenceBlogFieldNameVersion is a ConfluenceBlogFieldName enum value + ConfluenceBlogFieldNameVersion = "VERSION" +) + +// ConfluenceBlogFieldName_Values returns all elements of the ConfluenceBlogFieldName enum +func ConfluenceBlogFieldName_Values() []string { + return []string{ + ConfluenceBlogFieldNameAuthor, + ConfluenceBlogFieldNameDisplayUrl, + ConfluenceBlogFieldNameItemType, + ConfluenceBlogFieldNameLabels, + ConfluenceBlogFieldNamePublishDate, + ConfluenceBlogFieldNameSpaceKey, + ConfluenceBlogFieldNameSpaceName, + ConfluenceBlogFieldNameUrl, + ConfluenceBlogFieldNameVersion, + } +} + +const ( + // ConfluencePageFieldNameAuthor is a ConfluencePageFieldName enum value + ConfluencePageFieldNameAuthor = "AUTHOR" + + // ConfluencePageFieldNameContentStatus is a ConfluencePageFieldName enum value + ConfluencePageFieldNameContentStatus = "CONTENT_STATUS" + + // ConfluencePageFieldNameCreatedDate is a ConfluencePageFieldName enum value + ConfluencePageFieldNameCreatedDate = "CREATED_DATE" + + // ConfluencePageFieldNameDisplayUrl is a ConfluencePageFieldName enum value + ConfluencePageFieldNameDisplayUrl = "DISPLAY_URL" + + // ConfluencePageFieldNameItemType is a ConfluencePageFieldName enum value + ConfluencePageFieldNameItemType = "ITEM_TYPE" + + // ConfluencePageFieldNameLabels is a ConfluencePageFieldName enum value + ConfluencePageFieldNameLabels = "LABELS" + + // ConfluencePageFieldNameModifiedDate is a ConfluencePageFieldName enum value + ConfluencePageFieldNameModifiedDate = "MODIFIED_DATE" + + // ConfluencePageFieldNameParentId is a 
ConfluencePageFieldName enum value + ConfluencePageFieldNameParentId = "PARENT_ID" + + // ConfluencePageFieldNameSpaceKey is a ConfluencePageFieldName enum value + ConfluencePageFieldNameSpaceKey = "SPACE_KEY" + + // ConfluencePageFieldNameSpaceName is a ConfluencePageFieldName enum value + ConfluencePageFieldNameSpaceName = "SPACE_NAME" + + // ConfluencePageFieldNameUrl is a ConfluencePageFieldName enum value + ConfluencePageFieldNameUrl = "URL" + + // ConfluencePageFieldNameVersion is a ConfluencePageFieldName enum value + ConfluencePageFieldNameVersion = "VERSION" +) + +// ConfluencePageFieldName_Values returns all elements of the ConfluencePageFieldName enum +func ConfluencePageFieldName_Values() []string { + return []string{ + ConfluencePageFieldNameAuthor, + ConfluencePageFieldNameContentStatus, + ConfluencePageFieldNameCreatedDate, + ConfluencePageFieldNameDisplayUrl, + ConfluencePageFieldNameItemType, + ConfluencePageFieldNameLabels, + ConfluencePageFieldNameModifiedDate, + ConfluencePageFieldNameParentId, + ConfluencePageFieldNameSpaceKey, + ConfluencePageFieldNameSpaceName, + ConfluencePageFieldNameUrl, + ConfluencePageFieldNameVersion, + } +} + +const ( + // ConfluenceSpaceFieldNameDisplayUrl is a ConfluenceSpaceFieldName enum value + ConfluenceSpaceFieldNameDisplayUrl = "DISPLAY_URL" + + // ConfluenceSpaceFieldNameItemType is a ConfluenceSpaceFieldName enum value + ConfluenceSpaceFieldNameItemType = "ITEM_TYPE" + + // ConfluenceSpaceFieldNameSpaceKey is a ConfluenceSpaceFieldName enum value + ConfluenceSpaceFieldNameSpaceKey = "SPACE_KEY" + + // ConfluenceSpaceFieldNameUrl is a ConfluenceSpaceFieldName enum value + ConfluenceSpaceFieldNameUrl = "URL" +) + +// ConfluenceSpaceFieldName_Values returns all elements of the ConfluenceSpaceFieldName enum +func ConfluenceSpaceFieldName_Values() []string { + return []string{ + ConfluenceSpaceFieldNameDisplayUrl, + ConfluenceSpaceFieldNameItemType, + ConfluenceSpaceFieldNameSpaceKey, + 
ConfluenceSpaceFieldNameUrl, + } +} + +const ( + // ConfluenceVersionServer is a ConfluenceVersion enum value + ConfluenceVersionServer = "SERVER" +) + +// ConfluenceVersion_Values returns all elements of the ConfluenceVersion enum +func ConfluenceVersion_Values() []string { + return []string{ + ConfluenceVersionServer, + } +} + const ( // ContentTypePdf is a ContentType enum value ContentTypePdf = "PDF" @@ -10391,6 +11308,9 @@ const ( // DataSourceTypeCustom is a DataSourceType enum value DataSourceTypeCustom = "CUSTOM" + + // DataSourceTypeConfluence is a DataSourceType enum value + DataSourceTypeConfluence = "CONFLUENCE" ) // DataSourceType_Values returns all elements of the DataSourceType enum @@ -10403,6 +11323,7 @@ func DataSourceType_Values() []string { DataSourceTypeOnedrive, DataSourceTypeServicenow, DataSourceTypeCustom, + DataSourceTypeConfluence, } } diff --git a/service/neptune/api.go b/service/neptune/api.go index b764c829512..80c4cdcc56a 100644 --- a/service/neptune/api.go +++ b/service/neptune/api.go @@ -762,6 +762,101 @@ func (c *Neptune) CreateDBClusterWithContext(ctx aws.Context, input *CreateDBClu return out, req.Send() } +const opCreateDBClusterEndpoint = "CreateDBClusterEndpoint" + +// CreateDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the CreateDBClusterEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// The "output" return value is not valid until after Send returns without error. +// +// See CreateDBClusterEndpoint for more information on using the CreateDBClusterEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. 
+// +// +// // Example sending a request using the CreateDBClusterEndpointRequest method. +// req, resp := client.CreateDBClusterEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/CreateDBClusterEndpoint +func (c *Neptune) CreateDBClusterEndpointRequest(input *CreateDBClusterEndpointInput) (req *request.Request, output *CreateDBClusterEndpointOutput) { + op := &request.Operation{ + Name: opCreateDBClusterEndpoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateDBClusterEndpointInput{} + } + + output = &CreateDBClusterEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateDBClusterEndpoint API operation for Amazon Neptune. +// +// Creates a new custom endpoint and associates it with an Amazon Neptune DB +// cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Neptune's +// API operation CreateDBClusterEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeDBClusterEndpointQuotaExceededFault "DBClusterEndpointQuotaExceededFault" +// The cluster already has the maximum number of custom endpoints. +// +// * ErrCodeDBClusterEndpointAlreadyExistsFault "DBClusterEndpointAlreadyExistsFault" +// The specified custom endpoint cannot be created because it already exists. +// +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier does not refer to an existing DB cluster. +// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The DB cluster is not in a valid state. +// +// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" +// DBInstanceIdentifier does not refer to an existing DB instance. 
+// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The specified DB instance is not in the available state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/CreateDBClusterEndpoint +func (c *Neptune) CreateDBClusterEndpoint(input *CreateDBClusterEndpointInput) (*CreateDBClusterEndpointOutput, error) { + req, out := c.CreateDBClusterEndpointRequest(input) + return out, req.Send() +} + +// CreateDBClusterEndpointWithContext is the same as CreateDBClusterEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See CreateDBClusterEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Neptune) CreateDBClusterEndpointWithContext(ctx aws.Context, input *CreateDBClusterEndpointInput, opts ...request.Option) (*CreateDBClusterEndpointOutput, error) { + req, out := c.CreateDBClusterEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateDBClusterParameterGroup = "CreateDBClusterParameterGroup" // CreateDBClusterParameterGroupRequest generates a "aws/request.Request" representing the @@ -1510,6 +1605,92 @@ func (c *Neptune) DeleteDBClusterWithContext(ctx aws.Context, input *DeleteDBClu return out, req.Send() } +const opDeleteDBClusterEndpoint = "DeleteDBClusterEndpoint" + +// DeleteDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the DeleteDBClusterEndpoint operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// The "output" return value is not valid until after Send returns without error. +// +// See DeleteDBClusterEndpoint for more information on using the DeleteDBClusterEndpoint +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle, such as custom headers or retry logic. +// +// +// // Example sending a request using the DeleteDBClusterEndpointRequest method. +// req, resp := client.DeleteDBClusterEndpointRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DeleteDBClusterEndpoint +func (c *Neptune) DeleteDBClusterEndpointRequest(input *DeleteDBClusterEndpointInput) (req *request.Request, output *DeleteDBClusterEndpointOutput) { + op := &request.Operation{ + Name: opDeleteDBClusterEndpoint, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &DeleteDBClusterEndpointInput{} + } + + output = &DeleteDBClusterEndpointOutput{} + req = c.newRequest(op, input, output) + return +} + +// DeleteDBClusterEndpoint API operation for Amazon Neptune. +// +// Deletes a custom endpoint and removes it from an Amazon Neptune DB cluster. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Neptune's +// API operation DeleteDBClusterEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidDBClusterEndpointStateFault "InvalidDBClusterEndpointStateFault" +// The requested operation cannot be performed on the endpoint while the endpoint +// is in this state. +// +// * ErrCodeDBClusterEndpointNotFoundFault "DBClusterEndpointNotFoundFault" +// The specified custom endpoint doesn't exist. 
+// +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The DB cluster is not in a valid state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DeleteDBClusterEndpoint +func (c *Neptune) DeleteDBClusterEndpoint(input *DeleteDBClusterEndpointInput) (*DeleteDBClusterEndpointOutput, error) { + req, out := c.DeleteDBClusterEndpointRequest(input) + return out, req.Send() +} + +// DeleteDBClusterEndpointWithContext is the same as DeleteDBClusterEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteDBClusterEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Neptune) DeleteDBClusterEndpointWithContext(ctx aws.Context, input *DeleteDBClusterEndpointInput, opts ...request.Option) (*DeleteDBClusterEndpointOutput, error) { + req, out := c.DeleteDBClusterEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteDBClusterParameterGroup = "DeleteDBClusterParameterGroup" // DeleteDBClusterParameterGroupRequest generates a "aws/request.Request" representing the @@ -2043,6 +2224,146 @@ func (c *Neptune) DeleteEventSubscriptionWithContext(ctx aws.Context, input *Del return out, req.Send() } +const opDescribeDBClusterEndpoints = "DescribeDBClusterEndpoints" + +// DescribeDBClusterEndpointsRequest generates a "aws/request.Request" representing the +// client's request for the DescribeDBClusterEndpoints operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See DescribeDBClusterEndpoints for more information on using the DescribeDBClusterEndpoints +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeDBClusterEndpointsRequest method. +// req, resp := client.DescribeDBClusterEndpointsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DescribeDBClusterEndpoints +func (c *Neptune) DescribeDBClusterEndpointsRequest(input *DescribeDBClusterEndpointsInput) (req *request.Request, output *DescribeDBClusterEndpointsOutput) { + op := &request.Operation{ + Name: opDescribeDBClusterEndpoints, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"Marker"}, + OutputTokens: []string{"Marker"}, + LimitToken: "MaxRecords", + TruncationToken: "", + }, + } + + if input == nil { + input = &DescribeDBClusterEndpointsInput{} + } + + output = &DescribeDBClusterEndpointsOutput{} + req = c.newRequest(op, input, output) + return +} + +// DescribeDBClusterEndpoints API operation for Amazon Neptune. +// +// Returns information about endpoints for an Amazon Neptune DB cluster. +// +// This operation can also return information for Amazon RDS clusters and Amazon +// DocDB clusters. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon Neptune's +// API operation DescribeDBClusterEndpoints for usage and error information. 
+// +// Returned Error Codes: +// * ErrCodeDBClusterNotFoundFault "DBClusterNotFoundFault" +// DBClusterIdentifier does not refer to an existing DB cluster. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/DescribeDBClusterEndpoints +func (c *Neptune) DescribeDBClusterEndpoints(input *DescribeDBClusterEndpointsInput) (*DescribeDBClusterEndpointsOutput, error) { + req, out := c.DescribeDBClusterEndpointsRequest(input) + return out, req.Send() +} + +// DescribeDBClusterEndpointsWithContext is the same as DescribeDBClusterEndpoints with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeDBClusterEndpoints for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Neptune) DescribeDBClusterEndpointsWithContext(ctx aws.Context, input *DescribeDBClusterEndpointsInput, opts ...request.Option) (*DescribeDBClusterEndpointsOutput, error) { + req, out := c.DescribeDBClusterEndpointsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// DescribeDBClusterEndpointsPages iterates over the pages of a DescribeDBClusterEndpoints operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See DescribeDBClusterEndpoints method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a DescribeDBClusterEndpoints operation. 
+// pageNum := 0 +// err := client.DescribeDBClusterEndpointsPages(params, +// func(page *neptune.DescribeDBClusterEndpointsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *Neptune) DescribeDBClusterEndpointsPages(input *DescribeDBClusterEndpointsInput, fn func(*DescribeDBClusterEndpointsOutput, bool) bool) error { + return c.DescribeDBClusterEndpointsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// DescribeDBClusterEndpointsPagesWithContext same as DescribeDBClusterEndpointsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Neptune) DescribeDBClusterEndpointsPagesWithContext(ctx aws.Context, input *DescribeDBClusterEndpointsInput, fn func(*DescribeDBClusterEndpointsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *DescribeDBClusterEndpointsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.DescribeDBClusterEndpointsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*DescribeDBClusterEndpointsOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opDescribeDBClusterParameterGroups = "DescribeDBClusterParameterGroups" // DescribeDBClusterParameterGroupsRequest generates a "aws/request.Request" representing the @@ -4294,57 +4615,149 @@ func (c *Neptune) ModifyDBClusterWithContext(ctx aws.Context, input *ModifyDBClu return out, req.Send() } -const opModifyDBClusterParameterGroup = "ModifyDBClusterParameterGroup" +const opModifyDBClusterEndpoint = "ModifyDBClusterEndpoint" -// ModifyDBClusterParameterGroupRequest generates a "aws/request.Request" representing the -// client's request for the ModifyDBClusterParameterGroup operation. The "output" return +// ModifyDBClusterEndpointRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDBClusterEndpoint operation. The "output" return // value will be populated with the request's response once the request completes // successfully. // // Use "Send" method on the returned Request to send the API call to the service. // the "output" return value is not valid until after Send returns without error. // -// See ModifyDBClusterParameterGroup for more information on using the ModifyDBClusterParameterGroup +// See ModifyDBClusterEndpoint for more information on using the ModifyDBClusterEndpoint // API call, and error handling. // // This method is useful when you want to inject custom logic or configuration // into the SDK's request lifecycle. Such as custom headers, or retry logic. // // -// // Example sending a request using the ModifyDBClusterParameterGroupRequest method. -// req, resp := client.ModifyDBClusterParameterGroupRequest(params) +// // Example sending a request using the ModifyDBClusterEndpointRequest method. 
+// req, resp := client.ModifyDBClusterEndpointRequest(params) // // err := req.Send() // if err == nil { // resp is now filled // fmt.Println(resp) // } // -// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/ModifyDBClusterParameterGroup -func (c *Neptune) ModifyDBClusterParameterGroupRequest(input *ModifyDBClusterParameterGroupInput) (req *request.Request, output *ResetDBClusterParameterGroupOutput) { +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/ModifyDBClusterEndpoint +func (c *Neptune) ModifyDBClusterEndpointRequest(input *ModifyDBClusterEndpointInput) (req *request.Request, output *ModifyDBClusterEndpointOutput) { op := &request.Operation{ - Name: opModifyDBClusterParameterGroup, + Name: opModifyDBClusterEndpoint, HTTPMethod: "POST", HTTPPath: "/", } if input == nil { - input = &ModifyDBClusterParameterGroupInput{} + input = &ModifyDBClusterEndpointInput{} } - output = &ResetDBClusterParameterGroupOutput{} + output = &ModifyDBClusterEndpointOutput{} req = c.newRequest(op, input, output) return } -// ModifyDBClusterParameterGroup API operation for Amazon Neptune. +// ModifyDBClusterEndpoint API operation for Amazon Neptune. // -// Modifies the parameters of a DB cluster parameter group. To modify more than -// one parameter, submit a list of the following: ParameterName, ParameterValue, -// and ApplyMethod. A maximum of 20 parameters can be modified in a single request. +// Modifies the properties of an endpoint in an Amazon Neptune DB cluster. // -// Changes to dynamic parameters are applied immediately. Changes to static -// parameters require a reboot without failover to the DB cluster associated -// with the parameter group before the change can take effect. +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. 
+// +// See the AWS API reference guide for Amazon Neptune's +// API operation ModifyDBClusterEndpoint for usage and error information. +// +// Returned Error Codes: +// * ErrCodeInvalidDBClusterStateFault "InvalidDBClusterStateFault" +// The DB cluster is not in a valid state. +// +// * ErrCodeInvalidDBClusterEndpointStateFault "InvalidDBClusterEndpointStateFault" +// The requested operation cannot be performed on the endpoint while the endpoint +// is in this state. +// +// * ErrCodeDBClusterEndpointNotFoundFault "DBClusterEndpointNotFoundFault" +// The specified custom endpoint doesn't exist. +// +// * ErrCodeDBInstanceNotFoundFault "DBInstanceNotFound" +// DBInstanceIdentifier does not refer to an existing DB instance. +// +// * ErrCodeInvalidDBInstanceStateFault "InvalidDBInstanceState" +// The specified DB instance is not in the available state. +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/ModifyDBClusterEndpoint +func (c *Neptune) ModifyDBClusterEndpoint(input *ModifyDBClusterEndpointInput) (*ModifyDBClusterEndpointOutput, error) { + req, out := c.ModifyDBClusterEndpointRequest(input) + return out, req.Send() +} + +// ModifyDBClusterEndpointWithContext is the same as ModifyDBClusterEndpoint with the addition of +// the ability to pass a context and additional request options. +// +// See ModifyDBClusterEndpoint for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *Neptune) ModifyDBClusterEndpointWithContext(ctx aws.Context, input *ModifyDBClusterEndpointInput, opts ...request.Option) (*ModifyDBClusterEndpointOutput, error) { + req, out := c.ModifyDBClusterEndpointRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return out, req.Send() +} + +const opModifyDBClusterParameterGroup = "ModifyDBClusterParameterGroup" + +// ModifyDBClusterParameterGroupRequest generates a "aws/request.Request" representing the +// client's request for the ModifyDBClusterParameterGroup operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ModifyDBClusterParameterGroup for more information on using the ModifyDBClusterParameterGroup +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ModifyDBClusterParameterGroupRequest method. +// req, resp := client.ModifyDBClusterParameterGroupRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/neptune-2014-10-31/ModifyDBClusterParameterGroup +func (c *Neptune) ModifyDBClusterParameterGroupRequest(input *ModifyDBClusterParameterGroupInput) (req *request.Request, output *ResetDBClusterParameterGroupOutput) { + op := &request.Operation{ + Name: opModifyDBClusterParameterGroup, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ModifyDBClusterParameterGroupInput{} + } + + output = &ResetDBClusterParameterGroupOutput{} + req = c.newRequest(op, input, output) + return +} + +// ModifyDBClusterParameterGroup API operation for Amazon Neptune. +// +// Modifies the parameters of a DB cluster parameter group. To modify more than +// one parameter, submit a list of the following: ParameterName, ParameterValue, +// and ApplyMethod. 
A maximum of 20 parameters can be modified in a single request. +// +// Changes to dynamic parameters are applied immediately. Changes to static +// parameters require a reboot without failover to the DB cluster associated +// with the parameter group before the change can take effect. // // After you create a DB cluster parameter group, you should wait at least 5 // minutes before creating your first DB cluster that uses that DB cluster parameter @@ -6010,6 +6423,10 @@ type AddRoleToDBClusterInput struct { // DBClusterIdentifier is a required field DBClusterIdentifier *string `type:"string" required:"true"` + // The name of the feature for the Neptune DB cluster that the IAM role is to + // be associated with. For the list of supported feature names, see DBEngineVersion. + FeatureName *string `type:"string"` + // The Amazon Resource Name (ARN) of the IAM role to associate with the Neptune // DB cluster, for example arn:aws:iam::123456789012:role/NeptuneAccessRole. // @@ -6049,6 +6466,12 @@ func (s *AddRoleToDBClusterInput) SetDBClusterIdentifier(v string) *AddRoleToDBC return s } +// SetFeatureName sets the FeatureName field's value. +func (s *AddRoleToDBClusterInput) SetFeatureName(v string) *AddRoleToDBClusterInput { + s.FeatureName = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *AddRoleToDBClusterInput) SetRoleArn(v string) *AddRoleToDBClusterInput { s.RoleArn = &v @@ -6827,6 +7250,229 @@ func (s *CopyDBParameterGroupOutput) SetDBParameterGroup(v *DBParameterGroup) *C return s } +type CreateDBClusterEndpointInput struct { + _ struct{} `type:"structure"` + + // The identifier to use for the new endpoint. This parameter is stored as a + // lowercase string. + // + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. 
+ // + // DBClusterIdentifier is a required field + DBClusterIdentifier *string `type:"string" required:"true"` + + // The type of the endpoint. One of: READER, WRITER, ANY. + // + // EndpointType is a required field + EndpointType *string `type:"string" required:"true"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The tags to be assigned to the Amazon Neptune resource. + Tags []*Tag `locationNameList:"Tag" type:"list"` +} + +// String returns the string representation +func (s CreateDBClusterEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateDBClusterEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateDBClusterEndpointInput"} + if s.DBClusterEndpointIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier")) + } + if s.DBClusterIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterIdentifier")) + } + if s.EndpointType == nil { + invalidParams.Add(request.NewErrParamRequired("EndpointType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. 
+func (s *CreateDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *CreateDBClusterEndpointInput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterEndpointInput) SetDBClusterIdentifier(v string) *CreateDBClusterEndpointInput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *CreateDBClusterEndpointInput) SetEndpointType(v string) *CreateDBClusterEndpointInput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *CreateDBClusterEndpointInput) SetExcludedMembers(v []*string) *CreateDBClusterEndpointInput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *CreateDBClusterEndpointInput) SetStaticMembers(v []*string) *CreateDBClusterEndpointInput { + s.StaticMembers = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateDBClusterEndpointInput) SetTags(v []*Tag) *CreateDBClusterEndpointInput { + s.Tags = v + return s +} + +// This data type represents the information you need to connect to an Amazon +// Neptune DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon Neptune DB instance endpoints, +// see Endpoint. +type CreateDBClusterEndpointOutput struct { + _ struct{} `type:"structure"` + + // The type associated with a custom endpoint. One of: READER, WRITER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. 
+ DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // inactive, modifying. The inactive state applies to an endpoint that cannot + // be used for a certain kind of cluster, such as a writer endpoint for a read-only + // secondary cluster in a global database. + Status *string `type:"string"` +} + +// String returns the string representation +func (s CreateDBClusterEndpointOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateDBClusterEndpointOutput) GoString() string { + return s.String() +} + +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *CreateDBClusterEndpointOutput) SetCustomEndpointType(v string) *CreateDBClusterEndpointOutput { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. 
+func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *CreateDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *CreateDBClusterEndpointOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *CreateDBClusterEndpointOutput) SetEndpoint(v string) *CreateDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *CreateDBClusterEndpointOutput) SetEndpointType(v string) *CreateDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *CreateDBClusterEndpointOutput) SetExcludedMembers(v []*string) *CreateDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *CreateDBClusterEndpointOutput) SetStaticMembers(v []*string) *CreateDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *CreateDBClusterEndpointOutput) SetStatus(v string) *CreateDBClusterEndpointOutput { + s.Status = &v + return s +} + type CreateDBClusterInput struct { _ struct{} `type:"structure"` @@ -6892,10 +7538,7 @@ type CreateDBClusterInput struct { // Logs. EnableCloudwatchLogsExports []*string `type:"list"` - // True to enable mapping of AWS Identity and Access Management (IAM) accounts - // to database accounts, and otherwise false. - // - // Default: false + // Not supported by Neptune. EnableIAMDatabaseAuthentication *bool `type:"boolean"` // The name of the database engine to be used for this DB cluster. @@ -6905,10 +7548,9 @@ type CreateDBClusterInput struct { // Engine is a required field Engine *string `type:"string" required:"true"` - // The version number of the database engine to use. Currently, setting this - // parameter has no effect. + // The version number of the database engine to use for the new DB cluster. // - // Example: 1.0.1 + // Example: 1.0.2.1 EngineVersion *string `type:"string"` // The AWS KMS key identifier for an encrypted DB cluster. @@ -8770,6 +9412,132 @@ func (s *DBCluster) SetVpcSecurityGroups(v []*VpcSecurityGroupMembership) *DBClu return s } +// This data type represents the information you need to connect to an Amazon +// Neptune DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon Neptune DB instance endpoints, +// see Endpoint. +type DBClusterEndpoint struct { + _ struct{} `type:"structure"` + + // The type associated with a custom endpoint. One of: READER, WRITER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. 
This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // The DNS address of the endpoint. + Endpoint *string `type:"string"` + + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // inactive, modifying. The inactive state applies to an endpoint that cannot + // be used for a certain kind of cluster, such as a writer endpoint for a read-only + // secondary cluster in a global database. + Status *string `type:"string"` +} + +// String returns the string representation +func (s DBClusterEndpoint) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBClusterEndpoint) GoString() string { + return s.String() +} + +// SetCustomEndpointType sets the CustomEndpointType field's value. +func (s *DBClusterEndpoint) SetCustomEndpointType(v string) *DBClusterEndpoint { + s.CustomEndpointType = &v + return s +} + +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. 
+func (s *DBClusterEndpoint) SetDBClusterEndpointArn(v string) *DBClusterEndpoint { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DBClusterEndpoint) SetDBClusterEndpointIdentifier(v string) *DBClusterEndpoint { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *DBClusterEndpoint) SetDBClusterEndpointResourceIdentifier(v string) *DBClusterEndpoint { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DBClusterEndpoint) SetDBClusterIdentifier(v string) *DBClusterEndpoint { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *DBClusterEndpoint) SetEndpoint(v string) *DBClusterEndpoint { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *DBClusterEndpoint) SetEndpointType(v string) *DBClusterEndpoint { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *DBClusterEndpoint) SetExcludedMembers(v []*string) *DBClusterEndpoint { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *DBClusterEndpoint) SetStaticMembers(v []*string) *DBClusterEndpoint { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DBClusterEndpoint) SetStatus(v string) *DBClusterEndpoint { + s.Status = &v + return s +} + // Contains information about an instance that is part of a DB cluster. 
type DBClusterMember struct { _ struct{} `type:"structure"` @@ -8918,6 +9686,10 @@ func (s *DBClusterParameterGroup) SetDescription(v string) *DBClusterParameterGr type DBClusterRole struct { _ struct{} `type:"structure"` + // The name of the feature associated with the AWS Identity and Access Management + // (IAM) role. For the list of supported feature names, see DBEngineVersion. + FeatureName *string `type:"string"` + // The Amazon Resource Name (ARN) of the IAM role that is associated with the // DB cluster. RoleArn *string `type:"string"` @@ -8946,6 +9718,12 @@ func (s DBClusterRole) GoString() string { return s.String() } +// SetFeatureName sets the FeatureName field's value. +func (s *DBClusterRole) SetFeatureName(v string) *DBClusterRole { + s.FeatureName = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *DBClusterRole) SetRoleArn(v string) *DBClusterRole { s.RoleArn = &v @@ -10106,68 +10884,233 @@ func (s *DBSecurityGroupMembership) SetStatus(v string) *DBSecurityGroupMembersh type DBSubnetGroup struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) for the DB subnet group. - DBSubnetGroupArn *string `type:"string"` + // The Amazon Resource Name (ARN) for the DB subnet group. + DBSubnetGroupArn *string `type:"string"` + + // Provides the description of the DB subnet group. + DBSubnetGroupDescription *string `type:"string"` + + // The name of the DB subnet group. + DBSubnetGroupName *string `type:"string"` + + // Provides the status of the DB subnet group. + SubnetGroupStatus *string `type:"string"` + + // Contains a list of Subnet elements. + Subnets []*Subnet `locationNameList:"Subnet" type:"list"` + + // Provides the VpcId of the DB subnet group. 
+ VpcId *string `type:"string"` +} + +// String returns the string representation +func (s DBSubnetGroup) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DBSubnetGroup) GoString() string { + return s.String() +} + +// SetDBSubnetGroupArn sets the DBSubnetGroupArn field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupArn(v string) *DBSubnetGroup { + s.DBSubnetGroupArn = &v + return s +} + +// SetDBSubnetGroupDescription sets the DBSubnetGroupDescription field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupDescription(v string) *DBSubnetGroup { + s.DBSubnetGroupDescription = &v + return s +} + +// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. +func (s *DBSubnetGroup) SetDBSubnetGroupName(v string) *DBSubnetGroup { + s.DBSubnetGroupName = &v + return s +} + +// SetSubnetGroupStatus sets the SubnetGroupStatus field's value. +func (s *DBSubnetGroup) SetSubnetGroupStatus(v string) *DBSubnetGroup { + s.SubnetGroupStatus = &v + return s +} + +// SetSubnets sets the Subnets field's value. +func (s *DBSubnetGroup) SetSubnets(v []*Subnet) *DBSubnetGroup { + s.Subnets = v + return s +} + +// SetVpcId sets the VpcId field's value. +func (s *DBSubnetGroup) SetVpcId(v string) *DBSubnetGroup { + s.VpcId = &v + return s +} + +type DeleteDBClusterEndpointInput struct { + _ struct{} `type:"structure"` + + // The identifier associated with the custom endpoint. This parameter is stored + // as a lowercase string. + // + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` +} + +// String returns the string representation +func (s DeleteDBClusterEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DeleteDBClusterEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *DeleteDBClusterEndpointInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DeleteDBClusterEndpointInput"} + if s.DBClusterEndpointIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DeleteDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *DeleteDBClusterEndpointInput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// This data type represents the information you need to connect to an Amazon +// Neptune DB cluster. This data type is used as a response element in the following +// actions: +// +// * CreateDBClusterEndpoint +// +// * DescribeDBClusterEndpoints +// +// * ModifyDBClusterEndpoint +// +// * DeleteDBClusterEndpoint +// +// For the data structure that represents Amazon RDS DB instance endpoints, +// see Endpoint. +type DeleteDBClusterEndpointOutput struct { + _ struct{} `type:"structure"` + + // The type associated with a custom endpoint. One of: READER, WRITER, ANY. + CustomEndpointType *string `type:"string"` + + // The Amazon Resource Name (ARN) for the endpoint. + DBClusterEndpointArn *string `type:"string"` + + // The identifier associated with the endpoint. This parameter is stored as + // a lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` - // Provides the description of the DB subnet group. - DBSubnetGroupDescription *string `type:"string"` + // A unique system-generated identifier for an endpoint. It remains the same + // for the whole life of the endpoint. + DBClusterEndpointResourceIdentifier *string `type:"string"` - // The name of the DB subnet group. - DBSubnetGroupName *string `type:"string"` + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. 
+ DBClusterIdentifier *string `type:"string"` - // Provides the status of the DB subnet group. - SubnetGroupStatus *string `type:"string"` + // The DNS address of the endpoint. + Endpoint *string `type:"string"` - // Contains a list of Subnet elements. - Subnets []*Subnet `locationNameList:"Subnet" type:"list"` + // The type of the endpoint. One of: READER, WRITER, CUSTOM. + EndpointType *string `type:"string"` - // Provides the VpcId of the DB subnet group. - VpcId *string `type:"string"` + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` + + // The current status of the endpoint. One of: creating, available, deleting, + // inactive, modifying. The inactive state applies to an endpoint that cannot + // be used for a certain kind of cluster, such as a writer endpoint for a read-only + // secondary cluster in a global database. + Status *string `type:"string"` } // String returns the string representation -func (s DBSubnetGroup) String() string { +func (s DeleteDBClusterEndpointOutput) String() string { return awsutil.Prettify(s) } // GoString returns the string representation -func (s DBSubnetGroup) GoString() string { +func (s DeleteDBClusterEndpointOutput) GoString() string { return s.String() } -// SetDBSubnetGroupArn sets the DBSubnetGroupArn field's value. -func (s *DBSubnetGroup) SetDBSubnetGroupArn(v string) *DBSubnetGroup { - s.DBSubnetGroupArn = &v +// SetCustomEndpointType sets the CustomEndpointType field's value. 
+func (s *DeleteDBClusterEndpointOutput) SetCustomEndpointType(v string) *DeleteDBClusterEndpointOutput { + s.CustomEndpointType = &v return s } -// SetDBSubnetGroupDescription sets the DBSubnetGroupDescription field's value. -func (s *DBSubnetGroup) SetDBSubnetGroupDescription(v string) *DBSubnetGroup { - s.DBSubnetGroupDescription = &v +// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v return s } -// SetDBSubnetGroupName sets the DBSubnetGroupName field's value. -func (s *DBSubnetGroup) SetDBSubnetGroupName(v string) *DBSubnetGroup { - s.DBSubnetGroupName = &v +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v return s } -// SetSubnetGroupStatus sets the SubnetGroupStatus field's value. -func (s *DBSubnetGroup) SetSubnetGroupStatus(v string) *DBSubnetGroup { - s.SubnetGroupStatus = &v +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v return s } -// SetSubnets sets the Subnets field's value. -func (s *DBSubnetGroup) SetSubnets(v []*Subnet) *DBSubnetGroup { - s.Subnets = v +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *DeleteDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *DeleteDBClusterEndpointOutput { + s.DBClusterIdentifier = &v return s } -// SetVpcId sets the VpcId field's value. -func (s *DBSubnetGroup) SetVpcId(v string) *DBSubnetGroup { - s.VpcId = &v +// SetEndpoint sets the Endpoint field's value. 
+func (s *DeleteDBClusterEndpointOutput) SetEndpoint(v string) *DeleteDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *DeleteDBClusterEndpointOutput) SetEndpointType(v string) *DeleteDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *DeleteDBClusterEndpointOutput) SetExcludedMembers(v []*string) *DeleteDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *DeleteDBClusterEndpointOutput) SetStaticMembers(v []*string) *DeleteDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. +func (s *DeleteDBClusterEndpointOutput) SetStatus(v string) *DeleteDBClusterEndpointOutput { + s.Status = &v return s } @@ -10699,6 +11642,136 @@ func (s *DeleteEventSubscriptionOutput) SetEventSubscription(v *EventSubscriptio return s } +type DescribeDBClusterEndpointsInput struct { + _ struct{} `type:"structure"` + + // The identifier of the endpoint to describe. This parameter is stored as a + // lowercase string. + DBClusterEndpointIdentifier *string `type:"string"` + + // The DB cluster identifier of the DB cluster associated with the endpoint. + // This parameter is stored as a lowercase string. + DBClusterIdentifier *string `type:"string"` + + // A set of name-value pairs that define which endpoints to include in the output. + // The filters are specified as name-value pairs, in the format Name=endpoint_type,Values=endpoint_type1,endpoint_type2,.... + // Name can be one of: db-cluster-endpoint-type, db-cluster-endpoint-custom-type, + // db-cluster-endpoint-id, db-cluster-endpoint-status. Values for the db-cluster-endpoint-type + // filter can be one or more of: reader, writer, custom. Values for the db-cluster-endpoint-custom-type + // filter can be one or more of: reader, any. 
Values for the db-cluster-endpoint-status + // filter can be one or more of: available, creating, deleting, inactive, modifying. + Filters []*Filter `locationNameList:"Filter" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterEndpoints + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` + + // The maximum number of records to include in the response. If more records + // exist than the specified MaxRecords value, a pagination token called a marker + // is included in the response so you can retrieve the remaining results. + // + // Default: 100 + // + // Constraints: Minimum 20, maximum 100. + MaxRecords *int64 `type:"integer"` +} + +// String returns the string representation +func (s DescribeDBClusterEndpointsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterEndpointsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeDBClusterEndpointsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "DescribeDBClusterEndpointsInput"} + if s.Filters != nil { + for i, v := range s.Filters { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Filters", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *DescribeDBClusterEndpointsInput) SetDBClusterEndpointIdentifier(v string) *DescribeDBClusterEndpointsInput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. 
+func (s *DescribeDBClusterEndpointsInput) SetDBClusterIdentifier(v string) *DescribeDBClusterEndpointsInput { + s.DBClusterIdentifier = &v + return s +} + +// SetFilters sets the Filters field's value. +func (s *DescribeDBClusterEndpointsInput) SetFilters(v []*Filter) *DescribeDBClusterEndpointsInput { + s.Filters = v + return s +} + +// SetMarker sets the Marker field's value. +func (s *DescribeDBClusterEndpointsInput) SetMarker(v string) *DescribeDBClusterEndpointsInput { + s.Marker = &v + return s +} + +// SetMaxRecords sets the MaxRecords field's value. +func (s *DescribeDBClusterEndpointsInput) SetMaxRecords(v int64) *DescribeDBClusterEndpointsInput { + s.MaxRecords = &v + return s +} + +type DescribeDBClusterEndpointsOutput struct { + _ struct{} `type:"structure"` + + // Contains the details of the endpoints associated with the cluster and matching + // any filter conditions. + DBClusterEndpoints []*DBClusterEndpoint `locationNameList:"DBClusterEndpointList" type:"list"` + + // An optional pagination token provided by a previous DescribeDBClusterEndpoints + // request. If this parameter is specified, the response includes only records + // beyond the marker, up to the value specified by MaxRecords. + Marker *string `type:"string"` +} + +// String returns the string representation +func (s DescribeDBClusterEndpointsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DescribeDBClusterEndpointsOutput) GoString() string { + return s.String() +} + +// SetDBClusterEndpoints sets the DBClusterEndpoints field's value. +func (s *DescribeDBClusterEndpointsOutput) SetDBClusterEndpoints(v []*DBClusterEndpoint) *DescribeDBClusterEndpointsOutput { + s.DBClusterEndpoints = v + return s +} + +// SetMarker sets the Marker field's value. 
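The Filters doc comment on DescribeDBClusterEndpointsInput above describes name-value pairs written as Name=endpoint_type,Values=endpoint_type1,endpoint_type2,..., with a fixed set of allowed names and values. As a stdlib-only sketch of that pair format (the filter type, encode helper, and value table below are hypothetical illustrations for this diff, not part of the SDK):

```go
package main

import (
	"fmt"
	"strings"
)

// filter mirrors the shape of the SDK's Filter type for illustration:
// a filter name plus one or more values.
type filter struct {
	Name   string
	Values []string
}

// encode renders a filter in the documented Name=...,Values=... form,
// e.g. Name=db-cluster-endpoint-type,Values=reader,writer.
func (f filter) encode() string {
	return fmt.Sprintf("Name=%s,Values=%s", f.Name, strings.Join(f.Values, ","))
}

// validEndpointTypeValues lists the values the db-cluster-endpoint-type
// filter accepts, per the doc comment above.
var validEndpointTypeValues = map[string]bool{
	"reader": true, "writer": true, "custom": true,
}

func main() {
	f := filter{Name: "db-cluster-endpoint-type", Values: []string{"reader", "custom"}}
	// Reject values the service would not accept for this filter name.
	for _, v := range f.Values {
		if !validEndpointTypeValues[v] {
			panic("unsupported filter value: " + v)
		}
	}
	fmt.Println(f.encode())
}
```

The SDK itself takes these as structured []*Filter values rather than a single string; the string form above is how the same filters are written on the CLI.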
+func (s *DescribeDBClusterEndpointsOutput) SetMarker(v string) *DescribeDBClusterEndpointsOutput { + s.Marker = &v + return s +} + type DescribeDBClusterParameterGroupsInput struct { _ struct{} `type:"structure"` @@ -13019,6 +14092,9 @@ func (s *DoubleRange) SetTo(v float64) *DoubleRange { } // Specifies a connection endpoint. +// +// For the data structure that represents Amazon Neptune DB cluster endpoints, +// see DBClusterEndpoint. type Endpoint struct { _ struct{} `type:"structure"` @@ -13524,6 +14600,200 @@ func (s *ListTagsForResourceOutput) SetTagList(v []*Tag) *ListTagsForResourceOut return s } +type ModifyDBClusterEndpointInput struct { + _ struct{} `type:"structure"` + + // The identifier of the endpoint to modify. This parameter is stored as a lowercase + // string. + // + // DBClusterEndpointIdentifier is a required field + DBClusterEndpointIdentifier *string `type:"string" required:"true"` + + // The type of the endpoint. One of: READER, WRITER, ANY. + EndpointType *string `type:"string"` + + // List of DB instance identifiers that aren't part of the custom endpoint group. + // All other eligible instances are reachable through the custom endpoint. Only + // relevant if the list of static members is empty. + ExcludedMembers []*string `type:"list"` + + // List of DB instance identifiers that are part of the custom endpoint group. + StaticMembers []*string `type:"list"` +} + +// String returns the string representation +func (s ModifyDBClusterEndpointInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s ModifyDBClusterEndpointInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *ModifyDBClusterEndpointInput) Validate() error {
+	invalidParams := request.ErrInvalidParams{Context: "ModifyDBClusterEndpointInput"}
+	if s.DBClusterEndpointIdentifier == nil {
+		invalidParams.Add(request.NewErrParamRequired("DBClusterEndpointIdentifier"))
+	}
+
+	if invalidParams.Len() > 0 {
+		return invalidParams
+	}
+	return nil
+}
+
+// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value.
+func (s *ModifyDBClusterEndpointInput) SetDBClusterEndpointIdentifier(v string) *ModifyDBClusterEndpointInput {
+	s.DBClusterEndpointIdentifier = &v
+	return s
+}
+
+// SetEndpointType sets the EndpointType field's value.
+func (s *ModifyDBClusterEndpointInput) SetEndpointType(v string) *ModifyDBClusterEndpointInput {
+	s.EndpointType = &v
+	return s
+}
+
+// SetExcludedMembers sets the ExcludedMembers field's value.
+func (s *ModifyDBClusterEndpointInput) SetExcludedMembers(v []*string) *ModifyDBClusterEndpointInput {
+	s.ExcludedMembers = v
+	return s
+}
+
+// SetStaticMembers sets the StaticMembers field's value.
+func (s *ModifyDBClusterEndpointInput) SetStaticMembers(v []*string) *ModifyDBClusterEndpointInput {
+	s.StaticMembers = v
+	return s
+}
+
+// This data type represents the information you need to connect to an Amazon
+// Neptune DB cluster. This data type is used as a response element in the following
+// actions:
+//
+//    * CreateDBClusterEndpoint
+//
+//    * DescribeDBClusterEndpoints
+//
+//    * ModifyDBClusterEndpoint
+//
+//    * DeleteDBClusterEndpoint
+//
+// For the data structure that represents Amazon RDS DB instance endpoints,
+// see Endpoint.
+type ModifyDBClusterEndpointOutput struct {
+	_ struct{} `type:"structure"`
+
+	// The type associated with a custom endpoint. One of: READER, WRITER, ANY.
+	CustomEndpointType *string `type:"string"`
+
+	// The Amazon Resource Name (ARN) for the endpoint.
+	DBClusterEndpointArn *string `type:"string"`
+
+	// The identifier associated with the endpoint. This parameter is stored as
+	// a lowercase string.
+	DBClusterEndpointIdentifier *string `type:"string"`
+
+	// A unique system-generated identifier for an endpoint. It remains the same
+	// for the whole life of the endpoint.
+	DBClusterEndpointResourceIdentifier *string `type:"string"`
+
+	// The DB cluster identifier of the DB cluster associated with the endpoint.
+	// This parameter is stored as a lowercase string.
+	DBClusterIdentifier *string `type:"string"`
+
+	// The DNS address of the endpoint.
+	Endpoint *string `type:"string"`
+
+	// The type of the endpoint. One of: READER, WRITER, CUSTOM.
+	EndpointType *string `type:"string"`
+
+	// List of DB instance identifiers that aren't part of the custom endpoint group.
+	// All other eligible instances are reachable through the custom endpoint. Only
+	// relevant if the list of static members is empty.
+	ExcludedMembers []*string `type:"list"`
+
+	// List of DB instance identifiers that are part of the custom endpoint group.
+	StaticMembers []*string `type:"list"`
+
+	// The current status of the endpoint. One of: creating, available, deleting,
+	// inactive, modifying. The inactive state applies to an endpoint that cannot
+	// be used for a certain kind of cluster, such as a writer endpoint for a read-only
+	// secondary cluster in a global database.
+	Status *string `type:"string"`
+}
+
+// String returns the string representation
+func (s ModifyDBClusterEndpointOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s ModifyDBClusterEndpointOutput) GoString() string {
+	return s.String()
+}
+
+// SetCustomEndpointType sets the CustomEndpointType field's value.
+func (s *ModifyDBClusterEndpointOutput) SetCustomEndpointType(v string) *ModifyDBClusterEndpointOutput {
+	s.CustomEndpointType = &v
+	return s
+}
+
+// SetDBClusterEndpointArn sets the DBClusterEndpointArn field's value.
+func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointArn(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointArn = &v + return s +} + +// SetDBClusterEndpointIdentifier sets the DBClusterEndpointIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointIdentifier = &v + return s +} + +// SetDBClusterEndpointResourceIdentifier sets the DBClusterEndpointResourceIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterEndpointResourceIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterEndpointResourceIdentifier = &v + return s +} + +// SetDBClusterIdentifier sets the DBClusterIdentifier field's value. +func (s *ModifyDBClusterEndpointOutput) SetDBClusterIdentifier(v string) *ModifyDBClusterEndpointOutput { + s.DBClusterIdentifier = &v + return s +} + +// SetEndpoint sets the Endpoint field's value. +func (s *ModifyDBClusterEndpointOutput) SetEndpoint(v string) *ModifyDBClusterEndpointOutput { + s.Endpoint = &v + return s +} + +// SetEndpointType sets the EndpointType field's value. +func (s *ModifyDBClusterEndpointOutput) SetEndpointType(v string) *ModifyDBClusterEndpointOutput { + s.EndpointType = &v + return s +} + +// SetExcludedMembers sets the ExcludedMembers field's value. +func (s *ModifyDBClusterEndpointOutput) SetExcludedMembers(v []*string) *ModifyDBClusterEndpointOutput { + s.ExcludedMembers = v + return s +} + +// SetStaticMembers sets the StaticMembers field's value. +func (s *ModifyDBClusterEndpointOutput) SetStaticMembers(v []*string) *ModifyDBClusterEndpointOutput { + s.StaticMembers = v + return s +} + +// SetStatus sets the Status field's value. 
+func (s *ModifyDBClusterEndpointOutput) SetStatus(v string) *ModifyDBClusterEndpointOutput { + s.Status = &v + return s +} + type ModifyDBClusterInput struct { _ struct{} `type:"structure"` @@ -13581,11 +14851,13 @@ type ModifyDBClusterInput struct { // Default: false EnableIAMDatabaseAuthentication *bool `type:"boolean"` - // The version number of the database engine. Currently, setting this parameter - // has no effect. To upgrade your database engine to the most recent release, - // use the ApplyPendingMaintenanceAction API. + // The version number of the database engine to which you want to upgrade. Changing + // this parameter results in an outage. The change is applied during the next + // maintenance window unless the ApplyImmediately parameter is set to true. // - // For a list of valid engine versions, see CreateDBInstance, or call DescribeDBEngineVersions. + // For a list of valid engine versions, see Engine Releases for Amazon Neptune + // (https://docs.aws.amazon.com/neptune/latest/userguide/engine-releases.html), + // or call DescribeDBEngineVersions (https://docs.aws.amazon.com/neptune/latest/userguide/api-other-apis.html#DescribeDBEngineVersions). EngineVersion *string `type:"string"` // The new password for the master database user. This password can contain @@ -15566,6 +16838,10 @@ type RemoveRoleFromDBClusterInput struct { // DBClusterIdentifier is a required field DBClusterIdentifier *string `type:"string" required:"true"` + // The name of the feature for the DB cluster that the IAM role is to be disassociated + // from. For the list of supported feature names, see DBEngineVersion. + FeatureName *string `type:"string"` + // The Amazon Resource Name (ARN) of the IAM role to disassociate from the DB // cluster, for example arn:aws:iam::123456789012:role/NeptuneAccessRole. // @@ -15605,6 +16881,12 @@ func (s *RemoveRoleFromDBClusterInput) SetDBClusterIdentifier(v string) *RemoveR return s } +// SetFeatureName sets the FeatureName field's value. 
+func (s *RemoveRoleFromDBClusterInput) SetFeatureName(v string) *RemoveRoleFromDBClusterInput { + s.FeatureName = &v + return s +} + // SetRoleArn sets the RoleArn field's value. func (s *RemoveRoleFromDBClusterInput) SetRoleArn(v string) *RemoveRoleFromDBClusterInput { s.RoleArn = &v diff --git a/service/neptune/errors.go b/service/neptune/errors.go index 2d94bef6115..d7fff6471c7 100644 --- a/service/neptune/errors.go +++ b/service/neptune/errors.go @@ -26,6 +26,24 @@ const ( // User already has a DB cluster with the given identifier. ErrCodeDBClusterAlreadyExistsFault = "DBClusterAlreadyExistsFault" + // ErrCodeDBClusterEndpointAlreadyExistsFault for service response error code + // "DBClusterEndpointAlreadyExistsFault". + // + // The specified custom endpoint cannot be created because it already exists. + ErrCodeDBClusterEndpointAlreadyExistsFault = "DBClusterEndpointAlreadyExistsFault" + + // ErrCodeDBClusterEndpointNotFoundFault for service response error code + // "DBClusterEndpointNotFoundFault". + // + // The specified custom endpoint doesn't exist. + ErrCodeDBClusterEndpointNotFoundFault = "DBClusterEndpointNotFoundFault" + + // ErrCodeDBClusterEndpointQuotaExceededFault for service response error code + // "DBClusterEndpointQuotaExceededFault". + // + // The cluster already has the maximum number of custom endpoints. + ErrCodeDBClusterEndpointQuotaExceededFault = "DBClusterEndpointQuotaExceededFault" + // ErrCodeDBClusterNotFoundFault for service response error code // "DBClusterNotFoundFault". // @@ -205,6 +223,13 @@ const ( // Availability Zones that have more storage available. ErrCodeInsufficientStorageClusterCapacityFault = "InsufficientStorageClusterCapacity" + // ErrCodeInvalidDBClusterEndpointStateFault for service response error code + // "InvalidDBClusterEndpointStateFault". + // + // The requested operation cannot be performed on the endpoint while the endpoint + // is in this state. 
+ ErrCodeInvalidDBClusterEndpointStateFault = "InvalidDBClusterEndpointStateFault" + // ErrCodeInvalidDBClusterSnapshotStateFault for service response error code // "InvalidDBClusterSnapshotStateFault". // diff --git a/service/neptune/neptuneiface/interface.go b/service/neptune/neptuneiface/interface.go index b4a9499014f..081bb527ee8 100644 --- a/service/neptune/neptuneiface/interface.go +++ b/service/neptune/neptuneiface/interface.go @@ -92,6 +92,10 @@ type NeptuneAPI interface { CreateDBClusterWithContext(aws.Context, *neptune.CreateDBClusterInput, ...request.Option) (*neptune.CreateDBClusterOutput, error) CreateDBClusterRequest(*neptune.CreateDBClusterInput) (*request.Request, *neptune.CreateDBClusterOutput) + CreateDBClusterEndpoint(*neptune.CreateDBClusterEndpointInput) (*neptune.CreateDBClusterEndpointOutput, error) + CreateDBClusterEndpointWithContext(aws.Context, *neptune.CreateDBClusterEndpointInput, ...request.Option) (*neptune.CreateDBClusterEndpointOutput, error) + CreateDBClusterEndpointRequest(*neptune.CreateDBClusterEndpointInput) (*request.Request, *neptune.CreateDBClusterEndpointOutput) + CreateDBClusterParameterGroup(*neptune.CreateDBClusterParameterGroupInput) (*neptune.CreateDBClusterParameterGroupOutput, error) CreateDBClusterParameterGroupWithContext(aws.Context, *neptune.CreateDBClusterParameterGroupInput, ...request.Option) (*neptune.CreateDBClusterParameterGroupOutput, error) CreateDBClusterParameterGroupRequest(*neptune.CreateDBClusterParameterGroupInput) (*request.Request, *neptune.CreateDBClusterParameterGroupOutput) @@ -120,6 +124,10 @@ type NeptuneAPI interface { DeleteDBClusterWithContext(aws.Context, *neptune.DeleteDBClusterInput, ...request.Option) (*neptune.DeleteDBClusterOutput, error) DeleteDBClusterRequest(*neptune.DeleteDBClusterInput) (*request.Request, *neptune.DeleteDBClusterOutput) + DeleteDBClusterEndpoint(*neptune.DeleteDBClusterEndpointInput) (*neptune.DeleteDBClusterEndpointOutput, error) + 
DeleteDBClusterEndpointWithContext(aws.Context, *neptune.DeleteDBClusterEndpointInput, ...request.Option) (*neptune.DeleteDBClusterEndpointOutput, error) + DeleteDBClusterEndpointRequest(*neptune.DeleteDBClusterEndpointInput) (*request.Request, *neptune.DeleteDBClusterEndpointOutput) + DeleteDBClusterParameterGroup(*neptune.DeleteDBClusterParameterGroupInput) (*neptune.DeleteDBClusterParameterGroupOutput, error) DeleteDBClusterParameterGroupWithContext(aws.Context, *neptune.DeleteDBClusterParameterGroupInput, ...request.Option) (*neptune.DeleteDBClusterParameterGroupOutput, error) DeleteDBClusterParameterGroupRequest(*neptune.DeleteDBClusterParameterGroupInput) (*request.Request, *neptune.DeleteDBClusterParameterGroupOutput) @@ -144,6 +152,13 @@ type NeptuneAPI interface { DeleteEventSubscriptionWithContext(aws.Context, *neptune.DeleteEventSubscriptionInput, ...request.Option) (*neptune.DeleteEventSubscriptionOutput, error) DeleteEventSubscriptionRequest(*neptune.DeleteEventSubscriptionInput) (*request.Request, *neptune.DeleteEventSubscriptionOutput) + DescribeDBClusterEndpoints(*neptune.DescribeDBClusterEndpointsInput) (*neptune.DescribeDBClusterEndpointsOutput, error) + DescribeDBClusterEndpointsWithContext(aws.Context, *neptune.DescribeDBClusterEndpointsInput, ...request.Option) (*neptune.DescribeDBClusterEndpointsOutput, error) + DescribeDBClusterEndpointsRequest(*neptune.DescribeDBClusterEndpointsInput) (*request.Request, *neptune.DescribeDBClusterEndpointsOutput) + + DescribeDBClusterEndpointsPages(*neptune.DescribeDBClusterEndpointsInput, func(*neptune.DescribeDBClusterEndpointsOutput, bool) bool) error + DescribeDBClusterEndpointsPagesWithContext(aws.Context, *neptune.DescribeDBClusterEndpointsInput, func(*neptune.DescribeDBClusterEndpointsOutput, bool) bool, ...request.Option) error + DescribeDBClusterParameterGroups(*neptune.DescribeDBClusterParameterGroupsInput) (*neptune.DescribeDBClusterParameterGroupsOutput, error) 
DescribeDBClusterParameterGroupsWithContext(aws.Context, *neptune.DescribeDBClusterParameterGroupsInput, ...request.Option) (*neptune.DescribeDBClusterParameterGroupsOutput, error) DescribeDBClusterParameterGroupsRequest(*neptune.DescribeDBClusterParameterGroupsInput) (*request.Request, *neptune.DescribeDBClusterParameterGroupsOutput) @@ -255,6 +270,10 @@ type NeptuneAPI interface { ModifyDBClusterWithContext(aws.Context, *neptune.ModifyDBClusterInput, ...request.Option) (*neptune.ModifyDBClusterOutput, error) ModifyDBClusterRequest(*neptune.ModifyDBClusterInput) (*request.Request, *neptune.ModifyDBClusterOutput) + ModifyDBClusterEndpoint(*neptune.ModifyDBClusterEndpointInput) (*neptune.ModifyDBClusterEndpointOutput, error) + ModifyDBClusterEndpointWithContext(aws.Context, *neptune.ModifyDBClusterEndpointInput, ...request.Option) (*neptune.ModifyDBClusterEndpointOutput, error) + ModifyDBClusterEndpointRequest(*neptune.ModifyDBClusterEndpointInput) (*request.Request, *neptune.ModifyDBClusterEndpointOutput) + ModifyDBClusterParameterGroup(*neptune.ModifyDBClusterParameterGroupInput) (*neptune.ResetDBClusterParameterGroupOutput, error) ModifyDBClusterParameterGroupWithContext(aws.Context, *neptune.ModifyDBClusterParameterGroupInput, ...request.Option) (*neptune.ResetDBClusterParameterGroupOutput, error) ModifyDBClusterParameterGroupRequest(*neptune.ModifyDBClusterParameterGroupInput) (*request.Request, *neptune.ResetDBClusterParameterGroupOutput) diff --git a/service/sagemaker/api.go b/service/sagemaker/api.go index 57a9f9b0aff..4254c93c049 100644 --- a/service/sagemaker/api.go +++ b/service/sagemaker/api.go @@ -348,6 +348,86 @@ func (c *SageMaker) CreateAppWithContext(ctx aws.Context, input *CreateAppInput, return out, req.Send() } +const opCreateAppImageConfig = "CreateAppImageConfig" + +// CreateAppImageConfigRequest generates a "aws/request.Request" representing the +// client's request for the CreateAppImageConfig operation. 
The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateAppImageConfig for more information on using the CreateAppImageConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateAppImageConfigRequest method. +// req, resp := client.CreateAppImageConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateAppImageConfig +func (c *SageMaker) CreateAppImageConfigRequest(input *CreateAppImageConfigInput) (req *request.Request, output *CreateAppImageConfigOutput) { + op := &request.Operation{ + Name: opCreateAppImageConfig, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateAppImageConfigInput{} + } + + output = &CreateAppImageConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateAppImageConfig API operation for Amazon SageMaker Service. +// +// Creates a configuration for running an Amazon SageMaker image as a KernelGateway +// app. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation CreateAppImageConfig for usage and error information. +// +// Returned Error Types: +// * ResourceInUse +// Resource being accessed is in use. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateAppImageConfig +func (c *SageMaker) CreateAppImageConfig(input *CreateAppImageConfigInput) (*CreateAppImageConfigOutput, error) { + req, out := c.CreateAppImageConfigRequest(input) + return out, req.Send() +} + +// CreateAppImageConfigWithContext is the same as CreateAppImageConfig with the addition of +// the ability to pass a context and additional request options. +// +// See CreateAppImageConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) CreateAppImageConfigWithContext(ctx aws.Context, input *CreateAppImageConfigInput, opts ...request.Option) (*CreateAppImageConfigOutput, error) { + req, out := c.CreateAppImageConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateAutoMLJob = "CreateAutoMLJob" // CreateAutoMLJobRequest generates a "aws/request.Request" representing the @@ -699,29 +779,12 @@ func (c *SageMaker) CreateDomainRequest(input *CreateDomainInput) (req *request. // * VpcOnly - All Studio traffic is through the specified VPC and subnets. // Internet access is disabled by default. To allow internet access, you // must specify a NAT gateway. When internet access is disabled, you won't -// be able to train or host models unless your VPC has an interface endpoint -// (PrivateLink) or a NAT gateway and your security groups allow outbound -// connections. 
-// -// VpcOnly network access type -// -// When you choose VpcOnly, you must specify the following: -// -// * Security group inbound and outbound rules to allow NFS traffic over -// TCP on port 2049 between the domain and the EFS volume -// -// * Security group inbound and outbound rules to allow traffic between the -// JupyterServer app and the KernelGateway apps -// -// * Interface endpoints to access the SageMaker API and SageMaker runtime -// -// For more information, see: -// -// * Security groups for your VPC (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) -// -// * VPC with public and private subnets (NAT) (https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html) +// be able to run a Studio notebook or to train or host models unless your +// VPC has an interface endpoint to the SageMaker API and runtime or a NAT +// gateway and your security groups allow outbound connections. // -// * Connect to SageMaker through a VPC interface endpoint (https://docs.aws.amazon.com/sagemaker/latest/dg/interface-vpc-endpoint.html) +// For more information, see Connect SageMaker Studio Notebooks to Resources +// in a VPC (https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-and-internet-access.html). // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -1352,6 +1415,178 @@ func (c *SageMaker) CreateHyperParameterTuningJobWithContext(ctx aws.Context, in return out, req.Send() } +const opCreateImage = "CreateImage" + +// CreateImageRequest generates a "aws/request.Request" representing the +// client's request for the CreateImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. 
+// the "output" return value is not valid until after Send returns without error. +// +// See CreateImage for more information on using the CreateImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateImageRequest method. +// req, resp := client.CreateImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateImage +func (c *SageMaker) CreateImageRequest(input *CreateImageInput) (req *request.Request, output *CreateImageOutput) { + op := &request.Operation{ + Name: opCreateImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &CreateImageInput{} + } + + output = &CreateImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// CreateImage API operation for Amazon SageMaker Service. +// +// Creates a SageMaker Image. A SageMaker image represents a set of container +// images. Each of these container images is represented by a SageMaker ImageVersion. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation CreateImage for usage and error information. +// +// Returned Error Types: +// * ResourceInUse +// Resource being accessed is in use. +// +// * ResourceLimitExceeded +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateImage +func (c *SageMaker) CreateImage(input *CreateImageInput) (*CreateImageOutput, error) { + req, out := c.CreateImageRequest(input) + return out, req.Send() +} + +// CreateImageWithContext is the same as CreateImage with the addition of +// the ability to pass a context and additional request options. +// +// See CreateImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) CreateImageWithContext(ctx aws.Context, input *CreateImageInput, opts ...request.Option) (*CreateImageOutput, error) { + req, out := c.CreateImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opCreateImageVersion = "CreateImageVersion" + +// CreateImageVersionRequest generates a "aws/request.Request" representing the +// client's request for the CreateImageVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See CreateImageVersion for more information on using the CreateImageVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the CreateImageVersionRequest method. 
+// req, resp := client.CreateImageVersionRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateImageVersion
+func (c *SageMaker) CreateImageVersionRequest(input *CreateImageVersionInput) (req *request.Request, output *CreateImageVersionOutput) {
+ op := &request.Operation{
+ Name: opCreateImageVersion,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &CreateImageVersionInput{}
+ }
+
+ output = &CreateImageVersionOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// CreateImageVersion API operation for Amazon SageMaker Service.
+//
+// Creates a version of the SageMaker image specified by ImageName. The version
+// represents the Amazon Elastic Container Registry (ECR) container image specified
+// by BaseImage.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation CreateImageVersion for usage and error information.
+//
+// Returned Error Types:
+// * ResourceInUse
+// Resource being accessed is in use.
+//
+// * ResourceLimitExceeded
+// You have exceeded an Amazon SageMaker resource limit. For example, you might
+// have too many training jobs created.
+//
+// * ResourceNotFound
+// Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateImageVersion
+func (c *SageMaker) CreateImageVersion(input *CreateImageVersionInput) (*CreateImageVersionOutput, error) {
+ req, out := c.CreateImageVersionRequest(input)
+ return out, req.Send()
+}
+
+// CreateImageVersionWithContext is the same as CreateImageVersion with the addition of
+// the ability to pass a context and additional request options.
+// +// See CreateImageVersion for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) CreateImageVersionWithContext(ctx aws.Context, input *CreateImageVersionInput, opts ...request.Option) (*CreateImageVersionOutput, error) { + req, out := c.CreateImageVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opCreateLabelingJob = "CreateLabelingJob" // CreateLabelingJobRequest generates a "aws/request.Request" representing the @@ -1632,6 +1867,16 @@ func (c *SageMaker) CreateModelPackageRequest(input *CreateModelPackageInput) (r // // See the AWS API reference guide for Amazon SageMaker Service's // API operation CreateModelPackage for usage and error information. +// +// Returned Error Types: +// * ConflictException +// There was a conflict when you attempted to modify an experiment, trial, or +// trial component. +// +// * ResourceLimitExceeded +// You have exceeded an Amazon SageMaker resource limit. For example, you might +// have too many training jobs created. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/CreateModelPackage func (c *SageMaker) CreateModelPackage(input *CreateModelPackageInput) (*CreateModelPackageOutput, error) { req, out := c.CreateModelPackageRequest(input) @@ -3077,6 +3322,86 @@ func (c *SageMaker) DeleteAppWithContext(ctx aws.Context, input *DeleteAppInput, return out, req.Send() } +const opDeleteAppImageConfig = "DeleteAppImageConfig" + +// DeleteAppImageConfigRequest generates a "aws/request.Request" representing the +// client's request for the DeleteAppImageConfig operation. 
The "output" return
+// value will be populated with the request's response once the request completes
+// successfully.
+//
+// Use "Send" method on the returned Request to send the API call to the service.
+// the "output" return value is not valid until after Send returns without error.
+//
+// See DeleteAppImageConfig for more information on using the DeleteAppImageConfig
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DeleteAppImageConfigRequest method.
+// req, resp := client.DeleteAppImageConfigRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteAppImageConfig
+func (c *SageMaker) DeleteAppImageConfigRequest(input *DeleteAppImageConfigInput) (req *request.Request, output *DeleteAppImageConfigOutput) {
+ op := &request.Operation{
+ Name: opDeleteAppImageConfig,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteAppImageConfigInput{}
+ }
+
+ output = &DeleteAppImageConfigOutput{}
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler)
+ return
+}
+
+// DeleteAppImageConfig API operation for Amazon SageMaker Service.
+//
+// Deletes an AppImageConfig.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DeleteAppImageConfig for usage and error information.
+//
+// Returned Error Types:
+// * ResourceNotFound
+// Resource being accessed is not found.
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteAppImageConfig +func (c *SageMaker) DeleteAppImageConfig(input *DeleteAppImageConfigInput) (*DeleteAppImageConfigOutput, error) { + req, out := c.DeleteAppImageConfigRequest(input) + return out, req.Send() +} + +// DeleteAppImageConfigWithContext is the same as DeleteAppImageConfig with the addition of +// the ability to pass a context and additional request options. +// +// See DeleteAppImageConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DeleteAppImageConfigWithContext(ctx aws.Context, input *DeleteAppImageConfigInput, opts ...request.Option) (*DeleteAppImageConfigOutput, error) { + req, out := c.DeleteAppImageConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteCodeRepository = "DeleteCodeRepository" // DeleteCodeRepositoryRequest generates a "aws/request.Request" representing the @@ -3650,6 +3975,174 @@ func (c *SageMaker) DeleteHumanTaskUiWithContext(ctx aws.Context, input *DeleteH return out, req.Send() } +const opDeleteImage = "DeleteImage" + +// DeleteImageRequest generates a "aws/request.Request" representing the +// client's request for the DeleteImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteImage for more information on using the DeleteImage +// API call, and error handling. 
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DeleteImageRequest method.
+// req, resp := client.DeleteImageRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteImage
+func (c *SageMaker) DeleteImageRequest(input *DeleteImageInput) (req *request.Request, output *DeleteImageOutput) {
+ op := &request.Operation{
+ Name: opDeleteImage,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteImageInput{}
+ }
+
+ output = &DeleteImageOutput{}
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler)
+ return
+}
+
+// DeleteImage API operation for Amazon SageMaker Service.
+//
+// Deletes a SageMaker image and all versions of the image. The container images
+// aren't deleted.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DeleteImage for usage and error information.
+//
+// Returned Error Types:
+// * ResourceInUse
+// Resource being accessed is in use.
+//
+// * ResourceNotFound
+// Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteImage
+func (c *SageMaker) DeleteImage(input *DeleteImageInput) (*DeleteImageOutput, error) {
+ req, out := c.DeleteImageRequest(input)
+ return out, req.Send()
+}
+
+// DeleteImageWithContext is the same as DeleteImage with the addition of
+// the ability to pass a context and additional request options.
+// +// See DeleteImage for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DeleteImageWithContext(ctx aws.Context, input *DeleteImageInput, opts ...request.Option) (*DeleteImageOutput, error) { + req, out := c.DeleteImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDeleteImageVersion = "DeleteImageVersion" + +// DeleteImageVersionRequest generates a "aws/request.Request" representing the +// client's request for the DeleteImageVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DeleteImageVersion for more information on using the DeleteImageVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DeleteImageVersionRequest method. 
+// req, resp := client.DeleteImageVersionRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteImageVersion
+func (c *SageMaker) DeleteImageVersionRequest(input *DeleteImageVersionInput) (req *request.Request, output *DeleteImageVersionOutput) {
+ op := &request.Operation{
+ Name: opDeleteImageVersion,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DeleteImageVersionInput{}
+ }
+
+ output = &DeleteImageVersionOutput{}
+ req = c.newRequest(op, input, output)
+ req.Handlers.Unmarshal.Swap(jsonrpc.UnmarshalHandler.Name, protocol.UnmarshalDiscardBodyHandler)
+ return
+}
+
+// DeleteImageVersion API operation for Amazon SageMaker Service.
+//
+// Deletes a version of a SageMaker image. The container image the version represents
+// isn't deleted.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DeleteImageVersion for usage and error information.
+//
+// Returned Error Types:
+// * ResourceInUse
+// Resource being accessed is in use.
+//
+// * ResourceNotFound
+// Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteImageVersion
+func (c *SageMaker) DeleteImageVersion(input *DeleteImageVersionInput) (*DeleteImageVersionOutput, error) {
+ req, out := c.DeleteImageVersionRequest(input)
+ return out, req.Send()
+}
+
+// DeleteImageVersionWithContext is the same as DeleteImageVersion with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DeleteImageVersion for details on how to use this API operation.
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DeleteImageVersionWithContext(ctx aws.Context, input *DeleteImageVersionInput, opts ...request.Option) (*DeleteImageVersionOutput, error) { + req, out := c.DeleteImageVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDeleteModel = "DeleteModel" // DeleteModelRequest generates a "aws/request.Request" representing the @@ -3785,6 +4278,12 @@ func (c *SageMaker) DeleteModelPackageRequest(input *DeleteModelPackageInput) (r // // See the AWS API reference guide for Amazon SageMaker Service's // API operation DeleteModelPackage for usage and error information. +// +// Returned Error Types: +// * ConflictException +// There was a conflict when you attempted to modify an experiment, trial, or +// trial component. +// // See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DeleteModelPackage func (c *SageMaker) DeleteModelPackage(input *DeleteModelPackageInput) (*DeleteModelPackageOutput, error) { req, out := c.DeleteModelPackageRequest(input) @@ -4688,6 +5187,85 @@ func (c *SageMaker) DescribeAppWithContext(ctx aws.Context, input *DescribeAppIn return out, req.Send() } +const opDescribeAppImageConfig = "DescribeAppImageConfig" + +// DescribeAppImageConfigRequest generates a "aws/request.Request" representing the +// client's request for the DescribeAppImageConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. 
+//
+// See DescribeAppImageConfig for more information on using the DescribeAppImageConfig
+// API call, and error handling.
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DescribeAppImageConfigRequest method.
+// req, resp := client.DescribeAppImageConfigRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeAppImageConfig
+func (c *SageMaker) DescribeAppImageConfigRequest(input *DescribeAppImageConfigInput) (req *request.Request, output *DescribeAppImageConfigOutput) {
+ op := &request.Operation{
+ Name: opDescribeAppImageConfig,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeAppImageConfigInput{}
+ }
+
+ output = &DescribeAppImageConfigOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DescribeAppImageConfig API operation for Amazon SageMaker Service.
+//
+// Describes an AppImageConfig.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DescribeAppImageConfig for usage and error information.
+//
+// Returned Error Types:
+// * ResourceNotFound
+// Resource being accessed is not found.
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeAppImageConfig +func (c *SageMaker) DescribeAppImageConfig(input *DescribeAppImageConfigInput) (*DescribeAppImageConfigOutput, error) { + req, out := c.DescribeAppImageConfigRequest(input) + return out, req.Send() +} + +// DescribeAppImageConfigWithContext is the same as DescribeAppImageConfig with the addition of +// the ability to pass a context and additional request options. +// +// See DescribeAppImageConfig for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DescribeAppImageConfigWithContext(ctx aws.Context, input *DescribeAppImageConfigInput, opts ...request.Option) (*DescribeAppImageConfigOutput, error) { + req, out := c.DescribeAppImageConfigRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeAutoMLJob = "DescribeAutoMLJob" // DescribeAutoMLJobRequest generates a "aws/request.Request" representing the @@ -5468,6 +6046,164 @@ func (c *SageMaker) DescribeHyperParameterTuningJobWithContext(ctx aws.Context, return out, req.Send() } +const opDescribeImage = "DescribeImage" + +// DescribeImageRequest generates a "aws/request.Request" representing the +// client's request for the DescribeImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeImage for more information on using the DescribeImage +// API call, and error handling. 
+//
+// This method is useful when you want to inject custom logic or configuration
+// into the SDK's request lifecycle. Such as custom headers, or retry logic.
+//
+//
+// // Example sending a request using the DescribeImageRequest method.
+// req, resp := client.DescribeImageRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeImage
+func (c *SageMaker) DescribeImageRequest(input *DescribeImageInput) (req *request.Request, output *DescribeImageOutput) {
+ op := &request.Operation{
+ Name: opDescribeImage,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeImageInput{}
+ }
+
+ output = &DescribeImageOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DescribeImage API operation for Amazon SageMaker Service.
+//
+// Describes a SageMaker image.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DescribeImage for usage and error information.
+//
+// Returned Error Types:
+// * ResourceNotFound
+// Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeImage
+func (c *SageMaker) DescribeImage(input *DescribeImageInput) (*DescribeImageOutput, error) {
+ req, out := c.DescribeImageRequest(input)
+ return out, req.Send()
+}
+
+// DescribeImageWithContext is the same as DescribeImage with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeImage for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur.
In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DescribeImageWithContext(ctx aws.Context, input *DescribeImageInput, opts ...request.Option) (*DescribeImageOutput, error) { + req, out := c.DescribeImageRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +const opDescribeImageVersion = "DescribeImageVersion" + +// DescribeImageVersionRequest generates a "aws/request.Request" representing the +// client's request for the DescribeImageVersion operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See DescribeImageVersion for more information on using the DescribeImageVersion +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the DescribeImageVersionRequest method. 
+// req, resp := client.DescribeImageVersionRequest(params)
+//
+// err := req.Send()
+// if err == nil { // resp is now filled
+// fmt.Println(resp)
+// }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeImageVersion
+func (c *SageMaker) DescribeImageVersionRequest(input *DescribeImageVersionInput) (req *request.Request, output *DescribeImageVersionOutput) {
+ op := &request.Operation{
+ Name: opDescribeImageVersion,
+ HTTPMethod: "POST",
+ HTTPPath: "/",
+ }
+
+ if input == nil {
+ input = &DescribeImageVersionInput{}
+ }
+
+ output = &DescribeImageVersionOutput{}
+ req = c.newRequest(op, input, output)
+ return
+}
+
+// DescribeImageVersion API operation for Amazon SageMaker Service.
+//
+// Describes a version of a SageMaker image.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation DescribeImageVersion for usage and error information.
+//
+// Returned Error Types:
+// * ResourceNotFound
+// Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/DescribeImageVersion
+func (c *SageMaker) DescribeImageVersion(input *DescribeImageVersionInput) (*DescribeImageVersionOutput, error) {
+ req, out := c.DescribeImageVersionRequest(input)
+ return out, req.Send()
+}
+
+// DescribeImageVersionWithContext is the same as DescribeImageVersion with the addition of
+// the ability to pass a context and additional request options.
+//
+// See DescribeImageVersion for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests.
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) DescribeImageVersionWithContext(ctx aws.Context, input *DescribeImageVersionInput, opts ...request.Option) (*DescribeImageVersionOutput, error) { + req, out := c.DescribeImageVersionRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opDescribeLabelingJob = "DescribeLabelingJob" // DescribeLabelingJobRequest generates a "aws/request.Request" representing the @@ -6929,6 +7665,82 @@ func (c *SageMaker) ListAlgorithmsPagesWithContext(ctx aws.Context, input *ListA return p.Err() } +const opListAppImageConfigs = "ListAppImageConfigs" + +// ListAppImageConfigsRequest generates a "aws/request.Request" representing the +// client's request for the ListAppImageConfigs operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListAppImageConfigs for more information on using the ListAppImageConfigs +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListAppImageConfigsRequest method. 
+// req, resp := client.ListAppImageConfigsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListAppImageConfigs +func (c *SageMaker) ListAppImageConfigsRequest(input *ListAppImageConfigsInput) (req *request.Request, output *ListAppImageConfigsOutput) { + op := &request.Operation{ + Name: opListAppImageConfigs, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &ListAppImageConfigsInput{} + } + + output = &ListAppImageConfigsOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListAppImageConfigs API operation for Amazon SageMaker Service. +// +// Lists the AppImageConfigs in your account and their properties. The list +// can be filtered by creation time or modified time, and whether the AppImageConfig +// name contains a specified string. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation ListAppImageConfigs for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListAppImageConfigs +func (c *SageMaker) ListAppImageConfigs(input *ListAppImageConfigsInput) (*ListAppImageConfigsOutput, error) { + req, out := c.ListAppImageConfigsRequest(input) + return out, req.Send() +} + +// ListAppImageConfigsWithContext is the same as ListAppImageConfigs with the addition of +// the ability to pass a context and additional request options. +// +// See ListAppImageConfigs for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListAppImageConfigsWithContext(ctx aws.Context, input *ListAppImageConfigsInput, opts ...request.Option) (*ListAppImageConfigsOutput, error) { + req, out := c.ListAppImageConfigsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opListApps = "ListApps" // ListAppsRequest generates a "aws/request.Request" representing the @@ -8524,6 +9336,278 @@ func (c *SageMaker) ListHyperParameterTuningJobsPagesWithContext(ctx aws.Context return p.Err() } +const opListImageVersions = "ListImageVersions" + +// ListImageVersionsRequest generates a "aws/request.Request" representing the +// client's request for the ListImageVersions operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListImageVersions for more information on using the ListImageVersions +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListImageVersionsRequest method. 
+//    req, resp := client.ListImageVersionsRequest(params)
+//
+//    err := req.Send()
+//    if err == nil { // resp is now filled
+//        fmt.Println(resp)
+//    }
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListImageVersions
+func (c *SageMaker) ListImageVersionsRequest(input *ListImageVersionsInput) (req *request.Request, output *ListImageVersionsOutput) {
+	op := &request.Operation{
+		Name:       opListImageVersions,
+		HTTPMethod: "POST",
+		HTTPPath:   "/",
+		Paginator: &request.Paginator{
+			InputTokens:     []string{"NextToken"},
+			OutputTokens:    []string{"NextToken"},
+			LimitToken:      "MaxResults",
+			TruncationToken: "",
+		},
+	}
+
+	if input == nil {
+		input = &ListImageVersionsInput{}
+	}
+
+	output = &ListImageVersionsOutput{}
+	req = c.newRequest(op, input, output)
+	return
+}
+
+// ListImageVersions API operation for Amazon SageMaker Service.
+//
+// Lists the versions of a specified image and their properties. The list can
+// be filtered by creation time or modified time.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation ListImageVersions for usage and error information.
+//
+// Returned Error Types:
+//   * ResourceNotFound
+//   Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListImageVersions
+func (c *SageMaker) ListImageVersions(input *ListImageVersionsInput) (*ListImageVersionsOutput, error) {
+	req, out := c.ListImageVersionsRequest(input)
+	return out, req.Send()
+}
+
+// ListImageVersionsWithContext is the same as ListImageVersions with the addition of
+// the ability to pass a context and additional request options.
+//
+// See ListImageVersions for details on how to use this API operation.
+// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListImageVersionsWithContext(ctx aws.Context, input *ListImageVersionsInput, opts ...request.Option) (*ListImageVersionsOutput, error) { + req, out := c.ListImageVersionsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListImageVersionsPages iterates over the pages of a ListImageVersions operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListImageVersions method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListImageVersions operation. +// pageNum := 0 +// err := client.ListImageVersionsPages(params, +// func(page *sagemaker.ListImageVersionsOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SageMaker) ListImageVersionsPages(input *ListImageVersionsInput, fn func(*ListImageVersionsOutput, bool) bool) error { + return c.ListImageVersionsPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListImageVersionsPagesWithContext same as ListImageVersionsPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. 
+func (c *SageMaker) ListImageVersionsPagesWithContext(ctx aws.Context, input *ListImageVersionsInput, fn func(*ListImageVersionsOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListImageVersionsInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListImageVersionsRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListImageVersionsOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + +const opListImages = "ListImages" + +// ListImagesRequest generates a "aws/request.Request" representing the +// client's request for the ListImages operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See ListImages for more information on using the ListImages +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the ListImagesRequest method. 
+// req, resp := client.ListImagesRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListImages +func (c *SageMaker) ListImagesRequest(input *ListImagesInput) (req *request.Request, output *ListImagesOutput) { + op := &request.Operation{ + Name: opListImages, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &request.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListImagesInput{} + } + + output = &ListImagesOutput{} + req = c.newRequest(op, input, output) + return +} + +// ListImages API operation for Amazon SageMaker Service. +// +// Lists the images in your account and their properties. The list can be filtered +// by creation time or modified time, and whether the image name contains a +// specified string. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for Amazon SageMaker Service's +// API operation ListImages for usage and error information. +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/ListImages +func (c *SageMaker) ListImages(input *ListImagesInput) (*ListImagesOutput, error) { + req, out := c.ListImagesRequest(input) + return out, req.Send() +} + +// ListImagesWithContext is the same as ListImages with the addition of +// the ability to pass a context and additional request options. +// +// See ListImages for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. 
See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListImagesWithContext(ctx aws.Context, input *ListImagesInput, opts ...request.Option) (*ListImagesOutput, error) { + req, out := c.ListImagesRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + +// ListImagesPages iterates over the pages of a ListImages operation, +// calling the "fn" function with the response data for each page. To stop +// iterating, return false from the fn function. +// +// See ListImages method for more information on how to use this operation. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over at most 3 pages of a ListImages operation. +// pageNum := 0 +// err := client.ListImagesPages(params, +// func(page *sagemaker.ListImagesOutput, lastPage bool) bool { +// pageNum++ +// fmt.Println(page) +// return pageNum <= 3 +// }) +// +func (c *SageMaker) ListImagesPages(input *ListImagesInput, fn func(*ListImagesOutput, bool) bool) error { + return c.ListImagesPagesWithContext(aws.BackgroundContext(), input, fn) +} + +// ListImagesPagesWithContext same as ListImagesPages except +// it takes a Context and allows setting request options on the pages. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *SageMaker) ListImagesPagesWithContext(ctx aws.Context, input *ListImagesInput, fn func(*ListImagesOutput, bool) bool, opts ...request.Option) error { + p := request.Pagination{ + NewRequest: func() (*request.Request, error) { + var inCpy *ListImagesInput + if input != nil { + tmp := *input + inCpy = &tmp + } + req, _ := c.ListImagesRequest(inCpy) + req.SetContext(ctx) + req.ApplyOptions(opts...) 
+ return req, nil + }, + } + + for p.Next() { + if !fn(p.Page().(*ListImagesOutput), !p.HasNextPage()) { + break + } + } + + return p.Err() +} + const opListLabelingJobs = "ListLabelingJobs" // ListLabelingJobsRequest generates a "aws/request.Request" representing the @@ -12209,6 +13293,85 @@ func (c *SageMaker) StopTransformJobWithContext(ctx aws.Context, input *StopTran return out, req.Send() } +const opUpdateAppImageConfig = "UpdateAppImageConfig" + +// UpdateAppImageConfigRequest generates a "aws/request.Request" representing the +// client's request for the UpdateAppImageConfig operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateAppImageConfig for more information on using the UpdateAppImageConfig +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateAppImageConfigRequest method. +// req, resp := client.UpdateAppImageConfigRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateAppImageConfig +func (c *SageMaker) UpdateAppImageConfigRequest(input *UpdateAppImageConfigInput) (req *request.Request, output *UpdateAppImageConfigOutput) { + op := &request.Operation{ + Name: opUpdateAppImageConfig, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateAppImageConfigInput{} + } + + output = &UpdateAppImageConfigOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateAppImageConfig API operation for Amazon SageMaker Service. 
+//
+// Updates the properties of an AppImageConfig.
+//
+// Returns awserr.Error for service API and SDK errors. Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation UpdateAppImageConfig for usage and error information.
+//
+// Returned Error Types:
+//   * ResourceNotFound
+//   Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateAppImageConfig
+func (c *SageMaker) UpdateAppImageConfig(input *UpdateAppImageConfigInput) (*UpdateAppImageConfigOutput, error) {
+	req, out := c.UpdateAppImageConfigRequest(input)
+	return out, req.Send()
+}
+
+// UpdateAppImageConfigWithContext is the same as UpdateAppImageConfig with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateAppImageConfig for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SageMaker) UpdateAppImageConfigWithContext(ctx aws.Context, input *UpdateAppImageConfigInput, opts ...request.Option) (*UpdateAppImageConfigOutput, error) {
+	req, out := c.UpdateAppImageConfigRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+ return out, req.Send() +} + const opUpdateCodeRepository = "UpdateCodeRepository" // UpdateCodeRepositoryRequest generates a "aws/request.Request" representing the @@ -12631,6 +13794,89 @@ func (c *SageMaker) UpdateExperimentWithContext(ctx aws.Context, input *UpdateEx return out, req.Send() } +const opUpdateImage = "UpdateImage" + +// UpdateImageRequest generates a "aws/request.Request" representing the +// client's request for the UpdateImage operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See UpdateImage for more information on using the UpdateImage +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the UpdateImageRequest method. +// req, resp := client.UpdateImageRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateImage +func (c *SageMaker) UpdateImageRequest(input *UpdateImageInput) (req *request.Request, output *UpdateImageOutput) { + op := &request.Operation{ + Name: opUpdateImage, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &UpdateImageInput{} + } + + output = &UpdateImageOutput{} + req = c.newRequest(op, input, output) + return +} + +// UpdateImage API operation for Amazon SageMaker Service. +// +// Updates the properties of a SageMaker image. To change the image's tags, +// use the AddTags and DeleteTags APIs. +// +// Returns awserr.Error for service API and SDK errors. 
Use runtime type assertions
+// with awserr.Error's Code and Message methods to get detailed information about
+// the error.
+//
+// See the AWS API reference guide for Amazon SageMaker Service's
+// API operation UpdateImage for usage and error information.
+//
+// Returned Error Types:
+//   * ResourceInUse
+//   Resource being accessed is in use.
+//
+//   * ResourceNotFound
+//   Resource being accessed is not found.
+//
+// See also, https://docs.aws.amazon.com/goto/WebAPI/sagemaker-2017-07-24/UpdateImage
+func (c *SageMaker) UpdateImage(input *UpdateImageInput) (*UpdateImageOutput, error) {
+	req, out := c.UpdateImageRequest(input)
+	return out, req.Send()
+}
+
+// UpdateImageWithContext is the same as UpdateImage with the addition of
+// the ability to pass a context and additional request options.
+//
+// See UpdateImage for details on how to use this API operation.
+//
+// The context must be non-nil and will be used for request cancellation. If
+// the context is nil a panic will occur. In the future the SDK may create
+// sub-contexts for http.Requests. See https://golang.org/pkg/context/
+// for more information on using Contexts.
+func (c *SageMaker) UpdateImageWithContext(ctx aws.Context, input *UpdateImageInput, opts ...request.Option) (*UpdateImageOutput, error) {
+	req, out := c.UpdateImageRequest(input)
+	req.SetContext(ctx)
+	req.ApplyOptions(opts...)
+	return out, req.Send()
+}
+
const opUpdateMonitoringSchedule = "UpdateMonitoringSchedule"

// UpdateMonitoringScheduleRequest generates a "aws/request.Request" representing the
@@ -14252,7 +15498,7 @@ func (s *AnnotationConsolidationConfig) SetAnnotationConsolidationLambdaArn(v st
	return s
}

-// The app's details.
+// Details about an Amazon SageMaker app.
type AppDetails struct {
	_ struct{} `type:"structure"`

@@ -14321,6 +15567,67 @@ func (s *AppDetails) SetUserProfileName(v string) *AppDetails {
	return s
}

+// The configuration for running an Amazon SageMaker image as a KernelGateway
+// app.
+type AppImageConfigDetails struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the AppImageConfig. + AppImageConfigArn *string `type:"string"` + + // The name of the AppImageConfig. + AppImageConfigName *string `type:"string"` + + // When the AppImageConfig was created. + CreationTime *time.Time `type:"timestamp"` + + // The KernelGateway app. + KernelGatewayImageConfig *KernelGatewayImageConfig `type:"structure"` + + // When the AppImageConfig was last modified. + LastModifiedTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s AppImageConfigDetails) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s AppImageConfigDetails) GoString() string { + return s.String() +} + +// SetAppImageConfigArn sets the AppImageConfigArn field's value. +func (s *AppImageConfigDetails) SetAppImageConfigArn(v string) *AppImageConfigDetails { + s.AppImageConfigArn = &v + return s +} + +// SetAppImageConfigName sets the AppImageConfigName field's value. +func (s *AppImageConfigDetails) SetAppImageConfigName(v string) *AppImageConfigDetails { + s.AppImageConfigName = &v + return s +} + +// SetCreationTime sets the CreationTime field's value. +func (s *AppImageConfigDetails) SetCreationTime(v time.Time) *AppImageConfigDetails { + s.CreationTime = &v + return s +} + +// SetKernelGatewayImageConfig sets the KernelGatewayImageConfig field's value. +func (s *AppImageConfigDetails) SetKernelGatewayImageConfig(v *KernelGatewayImageConfig) *AppImageConfigDetails { + s.KernelGatewayImageConfig = v + return s +} + +// SetLastModifiedTime sets the LastModifiedTime field's value. +func (s *AppImageConfigDetails) SetLastModifiedTime(v time.Time) *AppImageConfigDetails { + s.LastModifiedTime = &v + return s +} + // Configuration to run a processing job in a specified container image. 
type AppSpecification struct { _ struct{} `type:"structure"` @@ -16700,6 +18007,100 @@ func (s *CreateAlgorithmOutput) SetAlgorithmArn(v string) *CreateAlgorithmOutput return s } +type CreateAppImageConfigInput struct { + _ struct{} `type:"structure"` + + // The name of the AppImageConfig. Must be unique to your account. + // + // AppImageConfigName is a required field + AppImageConfigName *string `type:"string" required:"true"` + + // The KernelGatewayImageConfig. + KernelGatewayImageConfig *KernelGatewayImageConfig `type:"structure"` + + // A list of tags to apply to the AppImageConfig. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s CreateAppImageConfigInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAppImageConfigInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateAppImageConfigInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateAppImageConfigInput"} + if s.AppImageConfigName == nil { + invalidParams.Add(request.NewErrParamRequired("AppImageConfigName")) + } + if s.KernelGatewayImageConfig != nil { + if err := s.KernelGatewayImageConfig.Validate(); err != nil { + invalidParams.AddNested("KernelGatewayImageConfig", err.(request.ErrInvalidParams)) + } + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetAppImageConfigName sets the AppImageConfigName field's value. 
+func (s *CreateAppImageConfigInput) SetAppImageConfigName(v string) *CreateAppImageConfigInput { + s.AppImageConfigName = &v + return s +} + +// SetKernelGatewayImageConfig sets the KernelGatewayImageConfig field's value. +func (s *CreateAppImageConfigInput) SetKernelGatewayImageConfig(v *KernelGatewayImageConfig) *CreateAppImageConfigInput { + s.KernelGatewayImageConfig = v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateAppImageConfigInput) SetTags(v []*Tag) *CreateAppImageConfigInput { + s.Tags = v + return s +} + +type CreateAppImageConfigOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the AppImageConfig. + AppImageConfigArn *string `type:"string"` +} + +// String returns the string representation +func (s CreateAppImageConfigOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateAppImageConfigOutput) GoString() string { + return s.String() +} + +// SetAppImageConfigArn sets the AppImageConfigArn field's value. +func (s *CreateAppImageConfigOutput) SetAppImageConfigArn(v string) *CreateAppImageConfigOutput { + s.AppImageConfigArn = &v + return s +} + type CreateAppInput struct { _ struct{} `type:"structure"` @@ -16813,7 +18214,7 @@ func (s *CreateAppInput) SetUserProfileName(v string) *CreateAppInput { type CreateAppOutput struct { _ struct{} `type:"structure"` - // The App's Amazon Resource Name (ARN). + // The Amazon Resource Name (ARN) of the app. AppArn *string `type:"string"` } @@ -17169,6 +18570,12 @@ type CreateCompilationJobInput struct { // // StoppingCondition is a required field StoppingCondition *StoppingCondition `type:"structure" required:"true"` + + // An array of key-value pairs that you want to use to organize and track your + // AWS resource costs. 
For more information, see Using Cost Allocation Tags
+	// (https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html#allocation-what)
+	// in the AWS Billing and Cost Management User Guide.
+	Tags []*Tag `type:"list"`
}

// String returns the string representation
@@ -17220,6 +18627,16 @@ func (s *CreateCompilationJobInput) Validate() error {
			invalidParams.AddNested("StoppingCondition", err.(request.ErrInvalidParams))
		}
	}
+	if s.Tags != nil {
+		for i, v := range s.Tags {
+			if v == nil {
+				continue
+			}
+			if err := v.Validate(); err != nil {
+				invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams))
+			}
+		}
+	}

	if invalidParams.Len() > 0 {
		return invalidParams
@@ -17257,6 +18674,12 @@ func (s *CreateCompilationJobInput) SetStoppingCondition(v *StoppingCondition) *
	return s
}

+// SetTags sets the Tags field's value.
+func (s *CreateCompilationJobInput) SetTags(v []*Tag) *CreateCompilationJobInput {
+	s.Tags = v
+	return s
+}
+
type CreateCompilationJobOutput struct {
	_ struct{} `type:"structure"`

@@ -17638,8 +19061,9 @@ type CreateEndpointInput struct {
	// EndpointConfigName is a required field
	EndpointConfigName *string `type:"string" required:"true"`

-	// The name of the endpoint. The name must be unique within an AWS Region in
-	// your AWS account.
+	// The name of the endpoint. The name must be unique within an AWS Region in
+	// your AWS account. The name is case-insensitive in CreateEndpoint, but the
+	// case is preserved and must be matched in .
	//
	// EndpointName is a required field
	EndpointName *string `type:"string" required:"true"`
@@ -18130,7 +19554,7 @@ type CreateHyperParameterTuningJobInput struct {
	// The name of the tuning job. This name is the prefix for the names of all
	// training jobs that this tuning job launches. The name must be unique within
-	// the same AWS account and AWS Region. 
The name must have 1 to 32 characters. // Valid characters are a-z, A-Z, 0-9, and : + = @ _ % - (hyphen). The name // is not case sensitive. // @@ -18303,6 +19727,230 @@ func (s *CreateHyperParameterTuningJobOutput) SetHyperParameterTuningJobArn(v st return s } +type CreateImageInput struct { + _ struct{} `type:"structure"` + + // The description of the image. + Description *string `min:"1" type:"string"` + + // The display name of the image. When the image is added to a domain, DisplayName + // must be unique to the domain. + DisplayName *string `min:"1" type:"string"` + + // The name of the image. Must be unique to your account. + // + // ImageName is a required field + ImageName *string `min:"1" type:"string" required:"true"` + + // The Amazon Resource Name (ARN) of an IAM role that enables Amazon SageMaker + // to perform tasks on your behalf. + // + // RoleArn is a required field + RoleArn *string `min:"20" type:"string" required:"true"` + + // A list of tags to apply to the image. + Tags []*Tag `type:"list"` +} + +// String returns the string representation +func (s CreateImageInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s CreateImageInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. 
+func (s *CreateImageInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "CreateImageInput"} + if s.Description != nil && len(*s.Description) < 1 { + invalidParams.Add(request.NewErrParamMinLen("Description", 1)) + } + if s.DisplayName != nil && len(*s.DisplayName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("DisplayName", 1)) + } + if s.ImageName == nil { + invalidParams.Add(request.NewErrParamRequired("ImageName")) + } + if s.ImageName != nil && len(*s.ImageName) < 1 { + invalidParams.Add(request.NewErrParamMinLen("ImageName", 1)) + } + if s.RoleArn == nil { + invalidParams.Add(request.NewErrParamRequired("RoleArn")) + } + if s.RoleArn != nil && len(*s.RoleArn) < 20 { + invalidParams.Add(request.NewErrParamMinLen("RoleArn", 20)) + } + if s.Tags != nil { + for i, v := range s.Tags { + if v == nil { + continue + } + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "Tags", i), err.(request.ErrInvalidParams)) + } + } + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetDescription sets the Description field's value. +func (s *CreateImageInput) SetDescription(v string) *CreateImageInput { + s.Description = &v + return s +} + +// SetDisplayName sets the DisplayName field's value. +func (s *CreateImageInput) SetDisplayName(v string) *CreateImageInput { + s.DisplayName = &v + return s +} + +// SetImageName sets the ImageName field's value. +func (s *CreateImageInput) SetImageName(v string) *CreateImageInput { + s.ImageName = &v + return s +} + +// SetRoleArn sets the RoleArn field's value. +func (s *CreateImageInput) SetRoleArn(v string) *CreateImageInput { + s.RoleArn = &v + return s +} + +// SetTags sets the Tags field's value. +func (s *CreateImageInput) SetTags(v []*Tag) *CreateImageInput { + s.Tags = v + return s +} + +type CreateImageOutput struct { + _ struct{} `type:"structure"` + + // The Amazon Resource Name (ARN) of the image. 
+	ImageArn *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CreateImageOutput) String() string {
+	return awsutil.Prettify(s)
+}
+
+// GoString returns the string representation
+func (s CreateImageOutput) GoString() string {
+	return s.String()
+}
+
+// SetImageArn sets the ImageArn field's value.
+func (s *CreateImageOutput) SetImageArn(v string) *CreateImageOutput {
+	s.ImageArn = &v
+	return s
+}
+
+type CreateImageVersionInput struct {
+	_ struct{} `type:"structure"`
+
+	// The registry path of the container image to use as the starting point for
+	// this version. The path is an Amazon Elastic Container Registry (ECR) URI
+	// in the following format:
+	//
+	//