
Improving documentation #170

Merged
merged 4 commits into master from documentation/update
Jun 11, 2018

Conversation

ewoutp
Contributor

@ewoutp ewoutp commented Jun 8, 2018

This PR contains several improvements, fixes & extensions of the manual.

@ewoutp ewoutp self-assigned this Jun 8, 2018
Member

@neunhoef neunhoef left a comment


Only typos. See comments. LGTM.

The exception to this is cursor related requests made to an ArangoDB `Cluster` deployment.
The coordinator that handles an initial query request (that results in a `Cursor`)
will save some in-memory state in that coordinator, if the result of the query
is to big to be transfer back in the response of the initial request.
Member


"to big" -> "too big"

Kubernetes cluster and synchronize your endpoints before making the
initial query request.
This will result in the use (by the driver) of internal DNS names of all coordinators.
A follow-up request can then be send to exaclty the same coordinator.
Member


"send" -> "sent"
"exaclty" -> "exactly"

@@ -10,13 +10,45 @@ In the `ArangoDeployment` resource, one can specify the type of storage
used by groups of servers using the `spec.<group>.storageClassName`
setting.

This is an example of a `Cluster` deployment that stores its agent & dbserver
data on `PersistentVolumes` that using the `my-local-ssd` `StorageClass`
Member


"using" -> "use"
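For context, the `storageClassName` setting discussed in this excerpt can be sketched as a manifest. This is an illustrative sketch only, not the actual diff content; the `apiVersion`, `metadata.name`, and the `my-local-ssd` class name are assumptions:

```yaml
# Sketch: a Cluster deployment whose agent & dbserver data lands on
# PersistentVolumes provisioned from the my-local-ssd StorageClass.
# apiVersion and metadata.name are hypothetical, for illustration only.
apiVersion: "database.arangodb.com/v1alpha"
kind: "ArangoDeployment"
metadata:
  name: "cluster-using-local-ssd"
spec:
  mode: Cluster
  agents:
    storageClassName: my-local-ssd
  dbservers:
    storageClassName: my-local-ssd
```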

in Kubernetes.
These logs are accessible through the `Pods` that group these containers.

The fetch the logs of the default container running in a `Pod`, run:
Member


"The" -> "To"


```bash
kubectl logs <pod-name> -n <namespace>
# or with follow option to keep inspecting logs while their are written
kubectl logs -f <pod-name> -n <namespace>
```
Member


"their" -> "they"


There are two common causes for this.

1) The `Pods` cannot be scheduled because there are no enough nodes available.
Member


"no" -> "not"
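As an aside on the scheduling point in this excerpt: when `Pods` stay `Pending` for lack of schedulable nodes, the usual way to confirm the cause is to inspect the scheduler events. A sketch with placeholder names, not taken from the diff:

```bash
# List pods and spot any stuck in Pending.
kubectl get pods -n <namespace>
# Inspect a pending pod; the Events section at the bottom reports why the
# scheduler could not place it (e.g. insufficient nodes or resources).
kubectl describe pod <pod-name> -n <namespace>
```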


- If an `Agent` was using the volume, it can be repaired as long as 2 other agents are still healthy.
- If a `DBServer` was using the volume, and the replication factor of all database
collections is 2 of higher, and the remaining dbservers are still healthy,
Member


"of" -> "or"
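The repair condition quoted above (replication factor of 2 or higher) can be checked from `arangosh`. A sketch, assuming a hypothetical collection named `mycollection` and a reachable coordinator endpoint:

```bash
# Print a collection's replication factor; a value >= 2 means another
# dbserver still holds a copy of each shard, so the volume can be repaired.
arangosh --server.endpoint tcp://<coordinator-host>:8529 \
  --javascript.execute-string 'print(db.mycollection.properties().replicationFactor)'
```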

@ewoutp ewoutp removed the request for review from matthewvon June 11, 2018 08:49
@ewoutp ewoutp merged commit e34089a into master Jun 11, 2018
@ewoutp ewoutp deleted the documentation/update branch June 11, 2018 08:52