Commit c74101d

Merge pull request #4734 from marcduiker/upgrade-hugo-docsy

Upgrade hugo docsy for 1.16

2 parents: ccd84a0 + 5e7ced6

File tree: 25 files changed, +663 −30 lines changed

.github/workflows/website-v1-15.yml renamed to .github/workflows/website-v1-16.yml

Lines changed: 5 additions & 5 deletions

````diff
@@ -1,14 +1,14 @@
-name: Azure Static Web App v1.15
+name: Azure Static Web App v1.16

 on:
   workflow_dispatch:
   push:
     branches:
-      - v1.15
+      - v1.16
   pull_request:
     types: [opened, synchronize, reopened, closed]
     branches:
-      - v1.15
+      - v1.16

 jobs:
   build_and_deploy_job:
@@ -47,7 +47,7 @@ jobs:
       HUGO_ENV: production
       HUGO_VERSION: "0.147.9"
     with:
-      azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
+      azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_16 }}
      repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments)
      skip_deploy_on_missing_secrets: true
      action: "upload"
@@ -66,6 +66,6 @@ jobs:
      id: closepullrequest
      uses: Azure/static-web-apps-deploy@v1
      with:
-       azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_15 }}
+       azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_V1_16 }}
       skip_deploy_on_missing_secrets: true
       action: "close"
````

daprdocs/content/en/developing-applications/building-blocks/bindings/howto-bindings.md

Lines changed: 2 additions & 0 deletions

````diff
@@ -112,6 +112,8 @@ The code examples below leverage Dapr SDKs to invoke the output bindings endpoin

 Here's an example of using a console app with top-level statements in .NET 6+:

+Here's an example of using a console app with top-level statements in .NET 6+:
+
 ```csharp
 using System.Text;
 using System.Threading.Tasks;
````
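
For comparison with the SDK examples referenced in this diff, an output binding can also be invoked over raw HTTP. A minimal sketch, assuming Dapr's default HTTP port 3500 and a hypothetical binding named `myevent`:

```bash
# Sketch: invoke an output binding over raw HTTP instead of an SDK.
# Port 3500 and the binding name "myevent" are assumptions.
curl -X POST http://localhost:3500/v1.0/bindings/myevent \
  -H "Content-Type: application/json" \
  -d '{ "operation": "create", "data": { "orderId": 42 } }'
```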

daprdocs/content/en/developing-applications/building-blocks/bindings/howto-triggers.md

Lines changed: 2 additions & 0 deletions

````diff
@@ -121,6 +121,8 @@ Below are code examples that leverage Dapr SDKs to demonstrate an input binding.

 The following example demonstrates how to configure an input binding using ASP.NET Core controllers.

+The following example demonstrates how to configure an input binding using ASP.NET Core controllers.
+
 ```csharp
 using System.Collections.Generic;
 using System.Threading.Tasks;
````
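
For reference, an input binding delivers events by POSTing to a route on the application that matches the binding's component name. A rough sketch of simulating that delivery locally, assuming an app on port 5000 and a hypothetical binding named `checkout`:

```bash
# Sketch: simulate the POST that Dapr makes to the app route named after
# the input binding. App port 5000 and binding name "checkout" are
# assumptions for illustration.
curl -X POST http://localhost:5000/checkout \
  -H "Content-Type: application/json" \
  -d '{ "orderId": 42 }'
```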

daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-features-concepts.md

Lines changed: 11 additions & 10 deletions

````diff
@@ -15,20 +15,19 @@ into the features and concepts included with Dapr Jobs and the various SDKs. Dap

 ## Job identity

-All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services
-interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as
+All jobs are registered with a case-sensitive job name. These names are intended to be unique across all services
+interfacing with the Dapr runtime. The name is used as an identifier when creating and modifying the job as well as
 to indicate which job a triggered invocation is associated with.

-Only one job can be associated with a name at any given time. Any attempt to create a new job using the same name
-as an existing job will result in an overwrite of this existing job.
+Only one job can be associated with a name at any given time. By default, any attempt to create a new job using the same name as an existing job results in an error. However, if the `overwrite` flag is set to `true`, the new job overwrites the existing job with the same name.

 ## Scheduling Jobs
 A job can be scheduled using any of the following mechanisms:
 - Intervals using Cron expressions, duration values, or period expressions
 - Specific dates and times

-For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that
-time zone is used. When not provided, the time zone used by the server running Dapr is used.
+For all time-based schedules, if a timestamp is provided with a time zone via the RFC3339 specification, that
+time zone is used. When not provided, the time zone used by the server running Dapr is used.
 In other words, do **not** assume that times run in UTC time zone, unless otherwise specified when scheduling
 the job.

@@ -48,7 +47,7 @@ fields spanning the values specified in the table below:

 ### Schedule using a duration value
 You can schedule jobs using [a Go duration string](https://pkg.go.dev/time#ParseDuration), in which
-a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix.
+a string consists of a (possibly) signed sequence of decimal numbers, each with an optional fraction and a unit suffix.
 Valid time units are `"ns"`, `"us"`, `"ms"`, `"s"`, `"m"`, or `"h"`.

 #### Example 1
@@ -70,7 +69,7 @@ The following period expressions are supported. The "@every" expression also acc
 | @hourly | Run once an hour at the beginning of the hour | 0 0 * * * * |

 ### Schedule using a specific date/time
-A job can also be scheduled to run at a particular point in time by providing a date using the
+A job can also be scheduled to run at a particular point in time by providing a date using the
 [RFC3339 specification](https://www.rfc-editor.org/rfc/rfc3339).

 #### Example 1
@@ -107,15 +106,17 @@ In this setup, you have full control over how triggered jobs are received and pr
 through this gRPC method.

 ### HTTP
-If a gRPC server isn't registered with Dapr when the application starts up, Dapr instead triggers jobs by making a
+If a gRPC server isn't registered with Dapr when the application starts up, Dapr instead triggers jobs by making a
 POST request to the endpoint `/job/<job-name>`. The body includes the following information about the job:
 - `Schedule`: When the job triggers occur
 - `RepeatCount`: An optional value indicating how often the job should repeat
 - `DueTime`: An optional point in time representing either the one time when the job should execute (if not recurring)
 or the not-before time from which the schedule should take effect
 - `Ttl`: An optional value indicating when the job should expire
 - `Payload`: A collection of bytes containing data originally stored when the job was scheduled
+- `Overwrite`: A flag to allow the requested job to overwrite an existing job with the same name, if it already exists.
+- `FailurePolicy`: An optional failure policy for the job.

 The `DueTime` and `Ttl` fields will reflect an RFC3339 timestamp value reflective of the time zone provided when the job was
 originally scheduled. If no time zone was provided, these values indicate the time zone used by the server running
-Dapr.
+Dapr.
````
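
To make the scheduling options in this diff concrete, here is a minimal sketch of creating a job over HTTP with a period expression, a duration-based `dueTime`, and a repeat count. The `v1.0-alpha1/jobs` route, port 3500, and the job name are assumptions for illustration, not part of this commit:

```bash
# Sketch: schedule a job that first fires 10s after creation, then runs
# every 5 minutes, 10 times in total. The alpha route, port 3500, and the
# job name are assumptions, not part of this diff.
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup \
  -H "Content-Type: application/json" \
  -d '{
        "schedule": "@every 5m",
        "dueTime": "10s",
        "repeats": 10
      }'
```

If the Scheduler accepts the request, the job then triggers back over gRPC or via a POST to `/job/prod-db-backup`, as described in the diff above.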

daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -53,11 +53,11 @@ Dapr's jobs API ensures the tasks represented in these scenarios are performed c

 ## Features

-The jobs API provides several features to make it easy for you to schedule jobs.
+The main functionality of the Jobs API allows you to create, retrieve, and delete scheduled jobs. By default, when you create a job with a name that already exists, the operation fails unless you explicitly set the `overwrite` flag to `true`. This ensures that existing jobs are not accidentally modified or overwritten.

 ### Schedule jobs across multiple replicas

-When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
+When you create a job, it does not replace an existing job with the same name, unless you explicitly set the `overwrite` flag. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.

 The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
````

daprdocs/content/en/developing-applications/building-blocks/workflow/workflow-patterns.md

Lines changed: 23 additions & 0 deletions

````diff
@@ -624,6 +624,29 @@ await context.CallActivityAsync("PostResults", sum);

 {{< /tabpane >}}

+With the release of 1.16, it's even easier to process workflow activities in parallel while putting an upper cap on
+concurrency by using the following extension methods on the `WorkflowContext`:
+
+{{% tabpane %}}
+
+{{% tab header=".NET" %}}
+<!-- .NET -->
+```csharp
+// Revisiting the earlier example...
+// Get a list of work items to process
+var workBatch = await context.CallActivityAsync<object[]>("GetWorkBatch", null);
+
+// Process deterministically in parallel with an upper cap of 5 activities at a time
+var results = await context.ProcessInParallelAsync(workBatch, workItem => context.CallActivityAsync<int>("ProcessWorkItem", workItem), maxConcurrency: 5);
+
+var sum = results.Sum(t => t);
+await context.CallActivityAsync("PostResults", sum);
+```
+
+{{% /tab %}}
+
+{{% /tabpane %}}
+
 Limiting the degree of concurrency in this way can be useful for limiting contention against shared resources. For example, if the activities need to call into external resources that have their own concurrency limits, like databases or external APIs, it can be useful to ensure that no more than a specified number of activities call that resource concurrently.

 ## Async HTTP APIs
````

daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md

Lines changed: 8 additions & 0 deletions

````diff
@@ -77,6 +77,14 @@ kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-
 Persistent Volume Claims are not deleted automatically with an [uninstall]({{% ref dapr-uninstall.md %}}). This is a deliberate safety measure to prevent accidental data loss.
 {{% /alert %}}

+{{% alert title="Note" color="primary" %}}
+For storage providers that do NOT support dynamic volume expansion: If Dapr has ever been installed on the cluster before, the Scheduler's Persistent Volume Claims must be manually uninstalled in order for new ones with increased storage size to be created.
+```bash
+kubectl delete pvc -n dapr-system dapr-scheduler-data-dir-dapr-scheduler-server-0 dapr-scheduler-data-dir-dapr-scheduler-server-1 dapr-scheduler-data-dir-dapr-scheduler-server-2
+```
+Persistent Volume Claims are not deleted automatically with an [uninstall]({{< ref dapr-uninstall.md >}}). This is a deliberate safety measure to prevent accidental data loss.
+{{% /alert %}}
+
 #### Increase existing Scheduler Storage Size

 {{% alert title="Warning" color="warning" %}}
````
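
Before deleting the claims shown in the new note, it can help to confirm their exact names; a small sketch using standard kubectl:

```bash
# Sketch: list the Scheduler PVCs in the dapr-system namespace to confirm
# their names before deleting them.
kubectl get pvc -n dapr-system
```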

daprdocs/content/en/reference/api/jobs_api.md

Lines changed: 35 additions & 0 deletions

````diff
@@ -37,6 +37,8 @@ Parameter | Description
 `dueTime` | An optional time at which the job should be active, or the "one shot" time, if other scheduling type fields are not provided. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from creation time), or non-repeating ISO8601.
 `repeats` | An optional number of times in which the job should be triggered. If not set, the job runs indefinitely or until expiration.
 `ttl` | An optional time to live or expiration of the job. Accepts a "point in time" string in the format of RFC3339, Go duration string (calculated from job creation time), or non-repeating ISO8601.
+`overwrite` | A boolean value to specify if the job can overwrite an existing one with the same name. Default value is `false`
+`failure_policy` | An optional failure policy for the job. Details of the format are below. If not set, the job is retried up to 3 times with a delay of 1 second between retries.

 #### schedule
 `schedule` accepts both systemd timer-style cron expressions, as well as human readable '@' prefixed period strings, as defined below.
@@ -62,6 +64,39 @@ Entry | Description | Equivalent
 @daily (or @midnight) | Run once a day, midnight | 0 0 0 * * *
 @hourly | Run once an hour, beginning of hour | 0 0 * * * *

+#### failure_policy
+
+`failure_policy` specifies how the job should handle failures.
+
+It can be set to `constant` or `drop`.
+- The `constant` policy retries the job constantly with the following configuration options.
+  - `max_retries` configures how many times the job should be retried. Defaults to retrying indefinitely. `nil` denotes unlimited retries, while `0` means the request will not be retried.
+  - `interval` configures the delay between retries. Defaults to retrying immediately. Valid values are of the form `200ms`, `15s`, `2m`, etc.
+- The `drop` policy drops the job after the first failure, without retrying.
+
+##### Example 1
+
+```json
+{
+  //...
+  "failure_policy": {
+    "constant": {
+      "max_retries": 3,
+      "interval": "10s"
+    }
+  }
+}
+```
+##### Example 2
+
+```json
+{
+  //...
+  "failure_policy": {
+    "drop": {}
+  }
+}
+```

 ### Request body
````
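
Pulling the new fields together, here is a hedged sketch of a scheduling request that combines the `overwrite` flag with a `constant` failure policy. The field names come from the diff above; the alpha route, port, and job name are assumptions:

```bash
# Sketch: one request body combining the new "overwrite" flag with a
# constant failure policy (3 retries, 10s apart). The route, port, and job
# name are assumptions, not part of this diff.
curl -X POST http://localhost:3500/v1.0-alpha1/jobs/cleanup \
  -H "Content-Type: application/json" \
  -d '{
        "schedule": "@every 1h",
        "overwrite": true,
        "failure_policy": {
          "constant": { "max_retries": 3, "interval": "10s" }
        }
      }'
```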

daprdocs/content/en/reference/components-reference/supported-bindings/gcpbucket.md

Lines changed: 124 additions & 1 deletion

````diff
@@ -82,8 +82,13 @@ This component supports **output binding** with the following operations:

 - `create` : [Create file](#create-file)
 - `get` : [Get file](#get-file)
+- `bulkGet` : [Bulk get objects](#bulk-get-objects)
 - `delete` : [Delete file](#delete-file)
 - `list`: [List file](#list-files)
+- `copy`: [Copy file](#copy-files)
+- `move`: [Move file](#move-files)
+- `rename`: [Rename file](#rename-files)
+

 ### Create file
@@ -216,6 +221,72 @@ The metadata parameters are:

 The response body contains the value stored in the object.

+### Bulk get objects
+
+To perform a bulk get operation that retrieves all bucket files at once, invoke the GCP bucket binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "bulkGet"
+}
+```
+
+The metadata parameters are:
+
+- `encodeBase64` - (optional) configuration to Base64-encode the file content before returning it, for all files
+
+#### Example
+
+{{% tabpane text=true %}}
+
+{{% tab header="Windows" %}}
+```bash
+curl -d '{ \"operation\": \"bulkget\"}' http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+```
+{{% /tab %}}
+
+{{% tab header="Linux" %}}
+```bash
+curl -d '{ "operation": "bulkget"}' \
+  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
+```
+{{% /tab %}}
+
+{{% /tabpane %}}
+
+#### Response
+
+The response body contains an array of objects, where each object represents a file in the bucket with the following structure:
+
+```json
+[
+  {
+    "name": "file1.txt",
+    "data": "content of file1",
+    "attrs": {
+      "bucket": "mybucket",
+      "name": "file1.txt",
+      "size": 1234,
+      ...
+    }
+  },
+  {
+    "name": "file2.txt",
+    "data": "content of file2",
+    "attrs": {
+      "bucket": "mybucket",
+      "name": "file2.txt",
+      "size": 5678,
+      ...
+    }
+  }
+]
+```
+
+Each object in the array contains:
+- `name`: The name of the file
+- `data`: The content of the file
+- `attrs`: Object attributes from GCP Storage including metadata like creation time, size, content type, etc.

 ### Delete object

@@ -262,7 +333,7 @@ An HTTP 204 (No Content) and empty body will be returned if successful.

 ### List objects

-To perform a list object operation, invoke the S3 binding with a `POST` method and the following JSON body:
+To perform a list object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:

 ```json
 {
@@ -321,6 +392,58 @@ The list of objects will be returned as JSON array in the following form:
   }
 ]
 ```
+
+### Copy objects
+
+To perform a copy object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "copy",
+  "metadata": {
+    "destinationBucket": "destination-bucket-name"
+  }
+}
+```
+
+The metadata parameters are:
+
+- `destinationBucket` - the name of the destination bucket (required)
+
+### Move objects
+
+To perform a move object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "move",
+  "metadata": {
+    "destinationBucket": "destination-bucket-name"
+  }
+}
+```
+
+The metadata parameters are:
+
+- `destinationBucket` - the name of the destination bucket (required)
+
+### Rename objects
+
+To perform a rename object operation, invoke the GCP bucket binding with a `POST` method and the following JSON body:
+
+```json
+{
+  "operation": "rename",
+  "metadata": {
+    "newName": "object-new-name"
+  }
+}
+```
+
+The metadata parameters are:
+
+- `newName` - the new name of the object (required)
+

 ## Related links

 - [Basic schema for a Dapr component]({{% ref component-schema %}})
````
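
As a usage sketch for the new operations, the `copy` request body documented above can be sent the same way as the existing curl examples on this page; `<dapr-port>` and `<binding-name>` remain placeholders:

```bash
# Sketch: invoke the new "copy" operation, mirroring the existing curl
# examples on this page. The request body matches the diff above.
curl -d '{ "operation": "copy", "metadata": { "destinationBucket": "destination-bucket-name" } }' \
  http://localhost:<dapr-port>/v1.0/bindings/<binding-name>
```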
