This repository has been archived by the owner on Nov 16, 2023. It is now read-only.

Commit

Merge branch 'master' into master
yradsmikham authored Apr 16, 2020
2 parents aa5d4ed + c5ecda6 commit 0146ecd
Showing 29 changed files with 564 additions and 186 deletions.
13 changes: 7 additions & 6 deletions docs/commands/data.json

Large diffs are not rendered by default.

30 changes: 29 additions & 1 deletion guides/building-helm-charts-for-spk.md
@@ -218,7 +218,7 @@ cluster.
[top-level configuration](#top-level-configuration)

`serviceName`: This configuration is overridden by the values from the
[ring-level-configuration](#ring-level-configuration) file
[ring-level-configuration](#ring-level-configuration) file.

`image.repository`: This is a special configuration that can be configured
_only_ by a pipeline variable, `ACR_NAME`. `ACR_NAME` must be
@@ -257,6 +257,34 @@ from [ring level configuration](#ring-level-configuration) originally generated
from the `hld-lifecycle` pipeline. The `selector` in this `service` targets a
Kubernetes `Deployment` that maintains the same label.

##### A note on Service Types

The Kubernetes Service scaffolded by the
[provided helm chart](./sample-helm-chart) _explicitly_ does not specify a
service type. By default, Kubernetes services are created as `ClusterIP`,
meaning that the Kubernetes Service is bound to a cluster _internal_ IP address,
preventing external users from accessing the service. While a user can choose
to use the `LoadBalancer` Service type within their Helm charts instead, doing
so is inadvisable - it binds an _external_ and _public_ IP address to the
Kubernetes Service, exposing it directly to external users. For more
information on service types and their routing implications,
refer to the Kubernetes Documentation
[here](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types).
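The chart's Service, with no `type` set, can be sketched as follows; the
`fabrikam-master` name, labels, and ports are illustrative assumptions, not
values taken from the sample chart:

```yaml
# Sketch of a Service that omits spec.type; Kubernetes defaults it to
# ClusterIP, so it is reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: fabrikam-master
spec:
  # No "type" field here: defaults to ClusterIP (cluster-internal IP only)
  selector:
    app: fabrikam-master
  ports:
    - port: 80
      targetPort: 8080
```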

To allow external traffic (i.e. ingress traffic) to be routed to Services hosted
on the cluster, `spk` utilizes the `traefik2` ingress controller and associated
`IngressRoute` rules to route external traffic into your cluster from a single
endpoint. An Ingress Controller can be configured to handle many scenarios that
arise when running services in production, such as circuit breaking or traffic
throttling - please refer to the `traefik2`
[configuration introduction](https://docs.traefik.io/v2.0/getting-started/configuration-overview/)
for more details. Further, assuming a correctly configured helm chart with all
the [requisite values](#mandatory-helm-chart-configuration), `spk` builds and
scaffolds an `IngressRoute` for a service and its associated rings
automatically. Refer to [static configuration](#static-configuration) for more
details.
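As a rough illustration, a scaffolded `IngressRoute` for a hypothetical
service `fabrikam` with ring `master` might resemble the following; the names,
match rule, and port are assumptions for the sketch, not the exact output of
`spk`:

```yaml
# Hypothetical traefik2 IngressRoute routing path-prefixed traffic to the
# ring-specific Service for the "master" ring.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: fabrikam-master
spec:
  routes:
    - kind: Rule
      match: "PathPrefix(`/fabrikam`)"
      services:
        - name: fabrikam-master
          port: 80
```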

#### Kubernetes Deployment

![Rendered Kubernetes Deployment](./images/spk-rendered-deployment.png)
43 changes: 43 additions & 0 deletions src/commands/deployment/create.md
@@ -0,0 +1,43 @@
## Description

This command inserts data about pipeline runs into Azure Table storage.

## Example

The following command takes Azure Table storage credentials and various
pipeline run details as parameters. It is used by the source build pipeline,
the release stage, and the manifest generation pipeline; each passes in
parameters depending on the information available to that pipeline. Here are
three examples:

```
spk deployment create -n $AZURE_STORAGE_ACCOUNT_NAME \
-k $AZURE_ACCOUNT_KEY \
-t $AZURE_TABLE_NAME \
-p $AZURE_TABLE_PARTITION_KEY \
--p1 $(Build.BuildId) \
--image-tag $tag_name \
--commit-id $commitId \
--service $service \
--repository $repourl
```

```
spk deployment create -n $AZURE_STORAGE_ACCOUNT_NAME \
-k $AZURE_ACCOUNT_KEY \
-t $AZURE_TABLE_NAME \
-p $AZURE_TABLE_PARTITION_KEY \
--p2 $(Build.BuildId) \
--hld-commit-id $latest_commit \
--env $(Build.SourceBranchName) \
--image-tag $tag_name \
--pr $pr_id \
--repository $repourl
```

```
spk deployment create -n $AZURE_STORAGE_ACCOUNT_NAME \
-k $AZURE_ACCOUNT_KEY \
-t $AZURE_TABLE_NAME \
-p $AZURE_TABLE_PARTITION_KEY \
--p3 $(Build.BuildId) \
--hld-commit-id $commitId \
--pr $pr_id \
--repository $repourl
```
4 changes: 2 additions & 2 deletions src/commands/deployment/get.decorator.json
@@ -24,8 +24,8 @@
"defaultValue": ""
},
{
"arg": "-e, --env <environment>",
"description": "Filter by environment name",
"arg": "-r, --ring <ring>",
"description": "Filter by ring name",
"defaultValue": ""
},
{
13 changes: 6 additions & 7 deletions src/commands/deployment/get.test.ts
@@ -33,7 +33,7 @@ const MOCKED_INPUT_VALUES: CommandOptions = {
buildId: "",
commitId: "",
deploymentId: "",
env: "",
ring: "",
imageTag: "",
output: "",
service: "",
@@ -45,7 +45,7 @@ const MOCKED_VALUES: ValidatedOptions = {
buildId: "",
commitId: "",
deploymentId: "",
env: "",
ring: "",
imageTag: "",
nTop: 0,
output: "",
@@ -292,7 +292,6 @@ describe("Introspect deployments", () => {
const dep = deployment as IDeployment;

// Make sure the basic fields are defined
expect(dep.deploymentId).not.toBe("");
expect(dep.service).not.toBe("");
expect(duration(dep)).not.toBe("");
expect(status(dep)).not.toBe("");
@@ -321,7 +320,6 @@ describe("Print deployments", () => {
const deployment = [
"2019-08-30T21:05:19.047Z",
"hello-bedrock",
"7468ca0a24e1",
"c626394",
6046,
"hello-bedrock-master-6046",
@@ -338,13 +336,14 @@
expect(table).toBeDefined();

if (table) {
const matchItems = table.filter((field) => field[2] === deployment[2]);
//Use date (index 0) as matching filter
const matchItems = table.filter((field) => field[0] === deployment[0]);
expect(matchItems).toHaveLength(1); // one matching row

(matchItems[0] as IDeployment[]).forEach((field, i) => {
expect(field).toEqual(deployment[i]);
});
expect(matchItems[0]).toHaveLength(14);
expect(matchItems[0]).toHaveLength(13);

table = printDeployments(
mockedDeps,
@@ -395,7 +394,7 @@ describe("Output formats", () => {
expect(table).toBeDefined();

if (table) {
table.forEach((field) => expect(field).toHaveLength(20));
table.forEach((field) => expect(field).toHaveLength(19));
}
});
});
10 changes: 4 additions & 6 deletions src/commands/deployment/get.ts
@@ -61,7 +61,7 @@ export interface InitObject {
export interface CommandOptions {
watch: boolean;
output: string;
env: string;
ring: string;
imageTag: string;
buildId: string;
commitId: string;
@@ -117,7 +117,7 @@ export const validateValues = (opts: CommandOptions): ValidatedOptions => {
buildId: opts.buildId,
commitId: opts.commitId,
deploymentId: opts.deploymentId,
env: opts.env,
ring: opts.ring,
imageTag: opts.imageTag,
nTop: top,
output: opts.output,
@@ -307,13 +307,12 @@ export const printDeployments = (
let header = [
"Start Time",
"Service",
"Deployment",
"Commit",
"Src to ACR",
"Image Tag",
"Result",
"ACR to HLD",
"Env",
"Ring",
"Hld Commit",
"Result",
];
@@ -373,7 +372,6 @@
: "-"
);
row.push(deployment.service !== "" ? deployment.service : "-");
row.push(deployment.deploymentId);
row.push(deployment.commitId !== "" ? deployment.commitId : "-");
row.push(
deployment.srcToDockerBuild ? deployment.srcToDockerBuild.id : "-"
@@ -515,7 +513,7 @@ export const getDeployments = async (
initObj.srcPipeline,
initObj.hldPipeline,
initObj.clusterPipeline,
values.env,
values.ring,
values.imageTag,
values.buildId,
values.commitId,
2 changes: 0 additions & 2 deletions src/commands/hld/append-variable-group.md
@@ -9,7 +9,6 @@ When an HLD repository is first initialized with `spk hld init`, the top portion
of the `manifest-generation.yaml` looks like this:

```yaml
# GENERATED WITH SPK VERSION 0.5.8
trigger:
branches:
include:
@@ -28,7 +27,6 @@ this case `my-vg`, will add it under the `variables` section if it does not
already exist:

```yaml
# GENERATED WITH SPK VERSION 0.5.8
trigger:
branches:
include:
9 changes: 7 additions & 2 deletions src/commands/hld/pipeline.ts
@@ -193,8 +193,13 @@ export const execute = async (
await installHldToManifestPipeline(opts);
await exitFn(0);
} catch (err) {
logError(buildError(errorStatusCode.CMD_EXE_ERR, "", err));
logger.error(err);
logError(
buildError(
errorStatusCode.CMD_EXE_ERR,
"hld-install-manifest-pipeline-cmd-failed",
err
)
);
await exitFn(1);
}
};
2 changes: 2 additions & 0 deletions src/commands/infra/generate.md
@@ -17,6 +17,8 @@ It will do the following:
- Create a "generated" directory for Terraform deployments (alongside the
scaffolded project directory)
- Copy the appropriate Terraform templates to the "generated" directory
- Check the Terraform module source values and convert them into a generic Git
URL based on the `definition.yaml`'s `source`, `version`, and `template` path.
- Create a `spk.tfvars` in the generated directory based on the variables
provided in `definition.yaml` files of the parent and leaf directories.

2 changes: 1 addition & 1 deletion src/commands/infra/generate.test.ts
@@ -59,7 +59,7 @@ const mockSourceInfo = {
};

const modifedSourceModuleData = `"aks-gitops" {
source = "github.com/microsoft/bedrock.git?ref=v0.0.1//cluster/azure/aks-gitops/"
source = "git::https://github.com/microsoft/bedrock.git//cluster/azure/aks-gitops/?ref=v0.0.1"
acr_enabled = var.acr_enabled
agent_vm_count = var.agent_vm_count
};
13 changes: 9 additions & 4 deletions src/commands/infra/generate.ts
@@ -551,10 +551,15 @@ export const moduleSourceModify = async (
splitLine[3].replace(/["']/g, "")
)
);
// Concatenate the Git URL with munged data
const gitSource = fileSource.source
.replace(/(^\w+:|^)\/\//g, "")
.concat("?ref=", fileSource.version, "//", repoModulePath);
// Concatenate the Git URL with munged data using a generic git repository format
const gitSource =
"git::" +
fileSource.source.concat(
"//",
repoModulePath,
"?ref=",
fileSource.version
);
// Replace the line
line = line.replace(moduleSource, gitSource);
}
6 changes: 5 additions & 1 deletion src/commands/project/pipeline.test.ts
@@ -18,6 +18,7 @@ import {
installLifecyclePipeline,
CommandOptions,
} from "./pipeline";
import { getErrorMessage } from "../../lib/errorBuilder";
import { deepClone } from "../../lib/util";

beforeAll(() => {
@@ -233,7 +234,10 @@ describe("installLifecyclePipeline and execute tests", () => {
expect(e).toBeDefined();
const builtDefnString = JSON.stringify({ fakeProperty: "temp" });
expect(e.message).toBe(
`project-pipeline-err-invalid-build-definition: Invalid BuildDefinition created, parameter 'id' is missing from ${builtDefnString}`
getErrorMessage({
errorKey: "project-pipeline-err-invalid-build-definition",
values: [builtDefnString],
})
);
}
});