This repository has been archived by the owner on Oct 12, 2023. It is now read-only.

Sample - Add a sample for using SAS resource files #253

Merged · 3 commits · Apr 23, 2018
3 changes: 3 additions & 0 deletions samples/sas_resource_files/1989.csv
@@ -0,0 +1,3 @@
Name,Age
Julie,16
John,19
3 changes: 3 additions & 0 deletions samples/sas_resource_files/1990.csv
@@ -0,0 +1,3 @@
Name,Age
Julie,17
John,20
11 changes: 11 additions & 0 deletions samples/sas_resource_files/README.md
@@ -0,0 +1,11 @@
# SAS Resource Files

This sample shows how to transfer data using secure [SAS blob tokens](https://docs.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1). SAS tokens allow secure transfers to and from cloud storage, whether from your local computer or from the nodes in the cluster.

This example first creates a write-only SAS token and uploads files to the cloud, then creates a read-only SAS token and downloads those files to the nodes in your cluster. Finally, it enumerates the files on each node so you can operate on them however you choose.
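
The token workflow looks like this (a minimal sketch taken from the full example in sas_resources_files_example.R, where `"datasets"` is the container name used):

```R
# Write-only token for uploads, read-only token for downloads
writeSasToken <- rAzureBatch::createSasToken(permission = "w", sr = "c", path = "datasets")
readSasToken <- rAzureBatch::createSasToken(permission = "r", sr = "c", path = "datasets")
```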

Make sure to replace the placeholder below with the storage account you want to use; it must be the storage account listed in your credentials.json file for this sample to work.

```R
storageAccountName <- "<YOUR_STORAGE_ACCOUNT>"
```
22 changes: 22 additions & 0 deletions samples/sas_resource_files/sas_resource_files_cluster.json
@@ -0,0 +1,22 @@
{
"name": "sas_resource_files",
"vmSize": "Standard_D11_v2",
"maxTasksPerNode": 1,
"poolSize": {
"dedicatedNodes": {
"min": 0,
"max": 0
},
"lowPriorityNodes": {
"min": 3,
"max": 3
},
"autoscaleFormula": "QUEUE"
},
"rPackages": {
"cran": [],
"github": [],
"bioconductor": []
},
"commandLine": []
}
65 changes: 65 additions & 0 deletions samples/sas_resource_files/sas_resources_files_example.R
@@ -0,0 +1,65 @@
library(doAzureParallel)

doAzureParallel::setCredentials("credentials.json")
storageAccountName <- "<YOUR_STORAGE_ACCOUNT>"
inputContainerName <- "datasets"

# Generate SAS tokens with the createSasToken function

# Write-only SAS. Will be used for uploading files to storage.
writeSasToken <- rAzureBatch::createSasToken(permission = "w", sr = "c", path = inputContainerName)

# Read-only SAS. Will be used for downloading files from storage.
readSasToken <- rAzureBatch::createSasToken(permission = "r", sr = "c", path = inputContainerName)
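# (In these calls, permission sets the access level: "w" grants write, "r" grants read;
#  sr = "c" scopes the token to the whole container rather than a single blob.)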

# Create a Storage container in the Azure Storage account
rAzureBatch::createContainer(inputContainerName)

# Upload blobs with a write sasToken
rAzureBatch::uploadBlob(inputContainerName,
                        fileDirectory = "1989.csv",
                        sasToken = writeSasToken,
                        accountName = storageAccountName)

rAzureBatch::uploadBlob(inputContainerName,
                        fileDirectory = "1990.csv",
                        sasToken = writeSasToken,
                        accountName = storageAccountName)

# Create URL paths with read-only permissions
csvFileUrl1 <- rAzureBatch::createBlobUrl(storageAccount = storageAccountName,
                                          containerName = inputContainerName,
                                          sasToken = readSasToken,
                                          fileName = "1989.csv")

csvFileUrl2 <- rAzureBatch::createBlobUrl(storageAccount = storageAccountName,
                                          containerName = inputContainerName,
                                          sasToken = readSasToken,
                                          fileName = "1990.csv")

# Create a list of files to download to the cluster using read-only permissions
# Place the files in a directory called 'data'
resource_files <- list(
  rAzureBatch::createResourceFile(url = csvFileUrl1, fileName = "data/1989.csv"),
  rAzureBatch::createResourceFile(url = csvFileUrl2, fileName = "data/1990.csv")
)

# Create the cluster
cluster <- makeCluster("sas_resource_files_cluster.json", resourceFiles = resource_files)
registerDoAzureParallel(cluster)
workers <- getDoParWorkers()

# Files downloaded to the cluster are placed in a specific directory on each node called 'wd'
# Use the pre-defined environment variable 'AZ_BATCH_NODE_STARTUP_DIR' to find the path to the directory
listFiles <- foreach(i = 1:workers, .combine = 'rbind') %dopar% {
  fileDirectory <- paste0(Sys.getenv("AZ_BATCH_NODE_STARTUP_DIR"), "/wd", "/data")
  files <- list.files(fileDirectory)
  df <- data.frame("node" = i, "files" = files)
  return(df)
}

# List the files downloaded to each node in the cluster
listFiles
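
# As an example of operating on the downloaded files, each worker could also
# read its local copies of the CSVs (a sketch; assumes the layout created above):
datasets <- foreach(i = 1:workers, .combine = 'rbind') %dopar% {
  fileDirectory <- paste0(Sys.getenv("AZ_BATCH_NODE_STARTUP_DIR"), "/wd", "/data")
  files <- list.files(fileDirectory, full.names = TRUE)
  do.call(rbind, lapply(files, read.csv))
}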

stopCluster(cluster)