
[DataFactory] Update SDK from swagger. (Azure#7582)
* [DataFactory] Update SDK from swagger.

* Update changelogs; add metadata.txt from SDK generation.

* Update metadata.

* Changes from code review feedback.
mdo98 authored and weshaggard committed Sep 17, 2019
1 parent 208c945 commit 4ac2114
Showing 7 changed files with 91 additions and 54 deletions.
6 changes: 3 additions & 3 deletions eng/mgmt/mgmtmetadata/datafactory_resource-manager.txt
@@ -3,12 +3,12 @@ AutoRest installed successfully.
Commencing code generation
Generating CSharp code
Executing AutoRest command
cmd.exe /c autorest.cmd https://github.com/Azure/azure-rest-api-specs/blob/master/specification/datafactory/resource-manager/readme.md --csharp --version=latest --reflect-api-versions --tag=package-2018-06 --csharp-sdks-folder=C:\Users\zhenqxu\Source\Repos\azure-sdk-for-net\sdk
2019-09-11 09:24:33 UTC
cmd.exe /c autorest.cmd https://github.com/Azure/azure-rest-api-specs/blob/master/specification/datafactory/resource-manager/readme.md --csharp --version=latest --reflect-api-versions --tag=package-2018-06 --csharp-sdks-folder=E:\src\repos\github\mdo98\azure-sdk-for-net\sdk
2019-09-16 18:30:18 UTC
Azure-rest-api-specs repository information
GitHub fork: Azure
Branch: master
Commit: 49a38e6bc534fec5b09a245568c8f66e9d3acb2c
Commit: e0d9ceb1ca85e76a31b8496201499a7312824c09
AutoRest information
Requested version: latest
Bootstrapper version: autorest@2.0.4283
14 changes: 0 additions & 14 deletions sdk/SDKs/_metadata/datafactory_resource-manager.txt

This file was deleted.

@@ -51,42 +51,61 @@ public AzureDatabricksLinkedService()
/// <param name="parameters">Parameters for linked service.</param>
/// <param name="annotations">List of tags that can be used for
/// describing the linked service.</param>
/// <param name="existingClusterId">The id of an existing cluster that
/// will be used for all runs of this job. Type: string (or Expression
/// <param name="existingClusterId">The id of an existing interactive
/// cluster that will be used for all runs of this activity. Type:
/// string (or Expression with resultType string).</param>
/// <param name="instancePoolId">The id of an existing instance pool
/// that will be used for all runs of this activity. Type: string (or
/// Expression with resultType string).</param>
/// <param name="newClusterVersion">If not using an existing
/// interactive cluster, this specifies the Spark version of a new job
/// cluster or instance pool nodes created for each run of this
/// activity. Required if instancePoolId is specified. Type: string (or
/// Expression with resultType string).</param>
/// <param name="newClusterNumOfWorker">If not using an existing
/// interactive cluster, this specifies the number of worker nodes to
/// use for the new job cluster or instance pool. For new job clusters,
/// this is a string-formatted Int32, like '1' means numOfWorker is 1
/// or '1:10' means auto-scale from 1 (min) to 10 (max). For instance
/// pools, this is a string-formatted Int32, and can only specify a
/// fixed number of worker nodes, such as '2'. Required if
/// newClusterVersion is specified. Type: string (or Expression with
/// resultType string).</param>
/// <param name="newClusterNodeType">The node type of the new job
/// cluster. This property is required if newClusterVersion is
/// specified and instancePoolId is not specified. If instancePoolId is
/// specified, this property is ignored. Type: string (or Expression
/// with resultType string).</param>
/// <param name="newClusterVersion">The Spark version of new cluster.
/// Type: string (or Expression with resultType string).</param>
/// <param name="newClusterNumOfWorker">Number of worker nodes that new
/// cluster should have. A string formatted Int32, like '1' means
/// numOfWorker is 1 or '1:10' means auto-scale from 1 as min and 10 as
/// max. Type: string (or Expression with resultType string).</param>
/// <param name="newClusterNodeType">The node types of new cluster.
/// Type: string (or Expression with resultType string).</param>
/// <param name="newClusterSparkConf">A set of optional, user-specified
/// Spark configuration key-value pairs.</param>
/// <param name="newClusterSparkEnvVars">A set of optional,
/// user-specified Spark environment variables key-value pairs.</param>
/// <param name="newClusterCustomTags">Additional tags for cluster
/// resources.</param>
/// resources. This property is ignored in instance pool
/// configurations.</param>
/// <param name="newClusterDriverNodeType">The driver node type for the
/// new cluster. Type: string (or Expression with resultType
/// new job cluster. This property is ignored in instance pool
/// configurations. Type: string (or Expression with resultType
/// string).</param>
/// <param name="newClusterInitScripts">User-defined initialization
/// scripts for the new cluster. Type: array of strings (or Expression
/// with resultType array of strings).</param>
/// <param name="newClusterEnableElasticDisk">Enable the elastic disk
/// on the new cluster. Type: boolean (or Expression with resultType
/// on the new cluster. This property is now ignored, and takes the
/// default elastic disk behavior in Databricks (elastic disks are
/// always enabled). Type: boolean (or Expression with resultType
/// boolean).</param>
/// <param name="encryptedCredential">The encrypted credential used for
/// authentication. Credentials are encrypted using the integration
/// runtime credential manager. Type: string (or Expression with
/// resultType string).</param>
public AzureDatabricksLinkedService(object domain, SecretBase accessToken, IDictionary<string, object> additionalProperties = default(IDictionary<string, object>), IntegrationRuntimeReference connectVia = default(IntegrationRuntimeReference), string description = default(string), IDictionary<string, ParameterSpecification> parameters = default(IDictionary<string, ParameterSpecification>), IList<object> annotations = default(IList<object>), object existingClusterId = default(object), object newClusterVersion = default(object), object newClusterNumOfWorker = default(object), object newClusterNodeType = default(object), IDictionary<string, object> newClusterSparkConf = default(IDictionary<string, object>), IDictionary<string, object> newClusterSparkEnvVars = default(IDictionary<string, object>), IDictionary<string, object> newClusterCustomTags = default(IDictionary<string, object>), object newClusterDriverNodeType = default(object), object newClusterInitScripts = default(object), object newClusterEnableElasticDisk = default(object), object encryptedCredential = default(object))
public AzureDatabricksLinkedService(object domain, SecretBase accessToken, IDictionary<string, object> additionalProperties = default(IDictionary<string, object>), IntegrationRuntimeReference connectVia = default(IntegrationRuntimeReference), string description = default(string), IDictionary<string, ParameterSpecification> parameters = default(IDictionary<string, ParameterSpecification>), IList<object> annotations = default(IList<object>), object existingClusterId = default(object), object instancePoolId = default(object), object newClusterVersion = default(object), object newClusterNumOfWorker = default(object), object newClusterNodeType = default(object), IDictionary<string, object> newClusterSparkConf = default(IDictionary<string, object>), IDictionary<string, object> newClusterSparkEnvVars = default(IDictionary<string, object>), IDictionary<string, object> newClusterCustomTags = default(IDictionary<string, object>), object newClusterDriverNodeType = default(object), object newClusterInitScripts = default(object), object newClusterEnableElasticDisk = default(object), object encryptedCredential = default(object))
: base(additionalProperties, connectVia, description, parameters, annotations)
{
Domain = domain;
AccessToken = accessToken;
ExistingClusterId = existingClusterId;
InstancePoolId = instancePoolId;
NewClusterVersion = newClusterVersion;
NewClusterNumOfWorker = newClusterNumOfWorker;
NewClusterNodeType = newClusterNodeType;
@@ -122,32 +141,49 @@ public AzureDatabricksLinkedService()
public SecretBase AccessToken { get; set; }

/// <summary>
/// Gets or sets the id of an existing cluster that will be used for
/// all runs of this job. Type: string (or Expression with resultType
/// string).
/// Gets or sets the id of an existing interactive cluster that will be
/// used for all runs of this activity. Type: string (or Expression
/// with resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.existingClusterId")]
public object ExistingClusterId { get; set; }

/// <summary>
/// Gets or sets the Spark version of new cluster. Type: string (or
/// Expression with resultType string).
/// Gets or sets the id of an existing instance pool that will be used
/// for all runs of this activity. Type: string (or Expression with
/// resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.instancePoolId")]
public object InstancePoolId { get; set; }

/// <summary>
/// Gets or sets if not using an existing interactive cluster, this
/// specifies the Spark version of a new job cluster or instance pool
/// nodes created for each run of this activity. Required if
/// instancePoolId is specified. Type: string (or Expression with
/// resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterVersion")]
public object NewClusterVersion { get; set; }

/// <summary>
/// Gets or sets number of worker nodes that new cluster should have. A
/// string formatted Int32, like '1' means numOfWorker is 1 or '1:10'
/// means auto-scale from 1 as min and 10 as max. Type: string (or
/// Expression with resultType string).
/// Gets or sets if not using an existing interactive cluster, this
/// specifies the number of worker nodes to use for the new job cluster
/// or instance pool. For new job clusters, this is a string-formatted
/// Int32, like '1' means numOfWorker is 1 or '1:10' means auto-scale
/// from 1 (min) to 10 (max). For instance pools, this is a
/// string-formatted Int32, and can only specify a fixed number of
/// worker nodes, such as '2'. Required if newClusterVersion is
/// specified. Type: string (or Expression with resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterNumOfWorker")]
public object NewClusterNumOfWorker { get; set; }
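The doc comment above describes a compact string format for `newClusterNumOfWorker`: `'1'` means a fixed count of one worker, `'1:10'` means auto-scale between 1 and 10, and instance pools accept only a fixed count. A minimal sketch of a parser for that format (a hypothetical helper for illustration, not part of the SDK), in Python:

```python
import re

def parse_num_of_worker(value: str, instance_pool: bool = False):
    """Parse a newClusterNumOfWorker string such as '1' or '1:10'.

    Returns (min_workers, max_workers); max_workers is None for a
    fixed-size cluster. Instance pools only allow a fixed count.
    """
    match = re.fullmatch(r"(\d+)(?::(\d+))?", value)
    if not match:
        raise ValueError(f"invalid worker spec: {value!r}")
    lo, hi = match.group(1), match.group(2)
    if hi is not None:
        if instance_pool:
            # Per the doc comment, pools can only take a fixed number.
            raise ValueError("instance pools require a fixed worker count")
        return int(lo), int(hi)
    return int(lo), None
```

For example, `parse_num_of_worker("1:10")` yields an auto-scale range, while the same string is rejected when `instance_pool=True`.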

/// <summary>
/// Gets or sets the node types of new cluster. Type: string (or
/// Expression with resultType string).
/// Gets or sets the node type of the new job cluster. This property is
/// required if newClusterVersion is specified and instancePoolId is
/// not specified. If instancePoolId is specified, this property is
/// ignored. Type: string (or Expression with resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterNodeType")]
public object NewClusterNodeType { get; set; }
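The doc comments above encode several cross-property rules: `newClusterVersion` is required when `instancePoolId` is specified; `newClusterNumOfWorker` is required when `newClusterVersion` is specified; and `newClusterNodeType` is required only for a new job cluster without an instance pool. A sketch of that validation logic (a hypothetical helper derived from the doc comments, not SDK code):

```python
def validate_cluster_config(
    existing_cluster_id=None,
    instance_pool_id=None,
    new_cluster_version=None,
    new_cluster_num_of_worker=None,
    new_cluster_node_type=None,
):
    """Raise ValueError if the property combination violates the
    requiredness rules stated in the AzureDatabricksLinkedService docs."""
    if existing_cluster_id:
        # An existing interactive cluster needs no new-cluster settings.
        return
    if instance_pool_id and not new_cluster_version:
        raise ValueError("newClusterVersion is required when instancePoolId is specified")
    if new_cluster_version:
        if not new_cluster_num_of_worker:
            raise ValueError("newClusterNumOfWorker is required when newClusterVersion is specified")
        if not instance_pool_id and not new_cluster_node_type:
            raise ValueError("newClusterNodeType is required for a new job cluster")
```

This mirrors why `NewClusterNodeType` is ignored when `InstancePoolId` is set: the pool, not the linked service, determines the node type.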
@@ -167,13 +203,15 @@ public AzureDatabricksLinkedService()
public IDictionary<string, object> NewClusterSparkEnvVars { get; set; }

/// <summary>
/// Gets or sets additional tags for cluster resources.
/// Gets or sets additional tags for cluster resources. This property
/// is ignored in instance pool configurations.
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterCustomTags")]
public IDictionary<string, object> NewClusterCustomTags { get; set; }

/// <summary>
/// Gets or sets the driver node type for the new cluster. Type: string
/// Gets or sets the driver node type for the new job cluster. This
/// property is ignored in instance pool configurations. Type: string
/// (or Expression with resultType string).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterDriverNodeType")]
@@ -188,7 +226,9 @@ public AzureDatabricksLinkedService()
public object NewClusterInitScripts { get; set; }

/// <summary>
/// Gets or sets enable the elastic disk on the new cluster. Type:
/// Gets or sets enable the elastic disk on the new cluster. This
/// property is now ignored, and takes the default elastic disk
/// behavior in Databricks (elastic disks are always enabled). Type:
/// boolean (or Expression with resultType boolean).
/// </summary>
[JsonProperty(PropertyName = "typeProperties.newClusterEnableElasticDisk")]
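The JSON properties named in the `JsonProperty` attributes above live under `typeProperties` in the linked service payload. A hedged sketch of what a Data Factory linked service definition using the new `instancePoolId` property might look like (the name, domain, pool id, and Spark version below are illustrative placeholders, not values from this commit):

```json
{
  "name": "AzureDatabricksViaInstancePool",
  "properties": {
    "type": "AzureDatabricks",
    "typeProperties": {
      "domain": "https://adb-1234567890123456.7.azuredatabricks.net",
      "accessToken": { "type": "SecureString", "value": "<access-token>" },
      "instancePoolId": "0123-456789-pool00",
      "newClusterVersion": "5.5.x-scala2.11",
      "newClusterNumOfWorker": "2"
    }
  }
}
```

Note that `newClusterVersion` and `newClusterNumOfWorker` still appear, since both are required when `instancePoolId` is specified, while node-type and tag properties would be ignored.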
@@ -39,10 +39,10 @@ public static IEnumerable<Tuple<string, string, string>> ApiInfo_DataFactoryMana
// BEGIN: Code Generation Metadata Section
public static readonly String AutoRestVersion = "latest";
public static readonly String AutoRestBootStrapperVersion = "autorest@2.0.4283";
public static readonly String AutoRestCmdExecuted = "cmd.exe /c autorest.cmd https://github.com/Azure/azure-rest-api-specs/blob/master/specification/datafactory/resource-manager/readme.md --csharp --version=latest --reflect-api-versions --tag=package-2018-06 --csharp-sdks-folder=C:\\Users\\zhenqxu\\Source\\Repos\\azure-sdk-for-net\\sdk";
public static readonly String AutoRestCmdExecuted = "cmd.exe /c autorest.cmd https://github.com/Azure/azure-rest-api-specs/blob/master/specification/datafactory/resource-manager/readme.md --csharp --version=latest --reflect-api-versions --tag=package-2018-06 --csharp-sdks-folder=E:\\src\\repos\\github\\azure-sdk-for-net\\sdk";
public static readonly String GithubForkName = "Azure";
public static readonly String GithubBranchName = "master";
public static readonly String GithubCommidId = "49a38e6bc534fec5b09a245568c8f66e9d3acb2c";
public static readonly String GithubCommidId = "e0d9ceb1ca85e76a31b8496201499a7312824c09";
public static readonly String CodeGenerationErrors = "";
public static readonly String GithubRepoName = "azure-rest-api-specs";
// END: Code Generation Metadata Section
@@ -5,11 +5,20 @@
<PropertyGroup>
<PackageId>Microsoft.Azure.Management.DataFactory</PackageId>
<Description>Azure Data Factory V2 is the data integration platform that goes beyond Azure Data Factory V1's orchestration and batch-processing of time-series data, with a general purpose app model supporting modern data warehousing patterns and scenarios, lift-and-shift SSIS, and data-driven SaaS applications. Compose and manage reliable and secure data integration workflows at scale. Use native ADF data connectors and Integration Runtimes to move and transform cloud and on-premises data that can be unstructured, semi-structured, and structured with Hadoop, Azure Data Lake, Spark, SQL Server, Cosmos DB and many other data platforms.</Description>
<Version>4.1.4</Version>
<Version>4.1.3</Version>
<AssemblyName>Microsoft.Azure.Management.DataFactory</AssemblyName>
<PackageTags>Microsoft Azure resource management;Data Factory;ADF;</PackageTags>
<PackageReleaseNotes>
<![CDATA[
- Added RunDimension to Pipeline
- Added Chaining Trigger
- Added Dependency Status property in Pipeline and Trigger Run
- Added outputColumns property to Office365Source
- Added support for ORC dataset in ADF
- Added alternateKeyName property to DynamicsSink, DynamicsCrmSink and CommonDataServiceForAppsSink
- Added property blockSizeInMB to BlobSink and BlobFSSink
- Added support for custom setup without script properties for an SSIS integration runtime
- Added support for instance pools in Databricks linked service
- Added outputColumns property to Office365Source
- Added support for ORC dataset in ADF
- Added alternateKeyName property to DynamicsSink, DynamicsCrmSink and CommonDataServiceForAppsSink
@@ -7,7 +7,7 @@
[assembly: AssemblyTitle("Microsoft Azure Data Factory Management Library")]
[assembly: AssemblyDescription("Provides management functionality for Microsoft Azure Data Factory Resources.")]
[assembly: AssemblyVersion("4.1.0.0")]
[assembly: AssemblyFileVersion("4.1.4.0")]
[assembly: AssemblyFileVersion("4.1.3.0")]
[assembly: AssemblyConfiguration("")]
[assembly: AssemblyCompany("Microsoft")]
[assembly: AssemblyProduct("Microsoft Azure .NET SDK")]
@@ -2,16 +2,18 @@

## Current version
### Feature Additions
- Added outputColumns property to Office365Source
- Added support for ORC dataset in ADF
- Added alternateKeyName property to DynamicsSink, DynamicsCrmSink and CommonDataServiceForAppsSink


## Version 4.1.3
### Feature Additions
- Added RunDimnesion to Pipeline
- Added RunDimension to Pipeline
- Added Chaining Trigger
- Added Dependency Status property in Pipeline and Trigger Run
- Added outputColumns property to Office365Source
- Added support for ORC dataset in ADF
- Added alternateKeyName property to DynamicsSink, DynamicsCrmSink and CommonDataServiceForAppsSink
- Added property blockSizeInMB to BlobSink and BlobFSSink
- Added support for custom setup without script properties for an SSIS integration runtime
- Added support for instance pools in Databricks linked service

## Version 4.1.2
### Feature Additions
