@@ -13,10 +13,10 @@ This guide explains how to set up cluster federation that lets us control multip

## Prerequisites

- This guide assumes that we have a running Kubernetes cluster.
+ This guide assumes that you have a running Kubernetes cluster.
If not, then head over to the [getting started guides](/docs/getting-started-guides/) to bring up a cluster.

- This guide also assumes that we have a Kubernetes release
+ This guide also assumes that you have a Kubernetes release
[downloaded from here](/docs/getting-started-guides/binary_release/),
extracted into a directory and all the commands in this guide are run from
that directory.
@@ -34,7 +34,7 @@

Setting up federation requires running the federation control plane which
consists of etcd, federation-apiserver (via the hyperkube binary) and
- federation-controller-manager (also via the hyperkube binary). We can run
+ federation-controller-manager (also via the hyperkube binary). You can run
these binaries as pods on an existing Kubernetes cluster.

Note: This is a new mechanism to turn up Kubernetes Cluster Federation. If
@@ -61,7 +61,7 @@ Initialize the setup.
$ federation/deploy/deploy.sh init
```

- Optionally we can create/edit `${FEDERATION_OUTPUT_ROOT}/values.yaml` to
+ Optionally, you can create/edit `${FEDERATION_OUTPUT_ROOT}/values.yaml` to
customize any value in
[federation/federation/manifests/federation/values.yaml](https://github.com/madhusudancs/kubernetes-anywhere/blob/federation/federation/manifests/federation/values.yaml). Example:

@@ -72,38 +72,38 @@ controllerManagerRegistry: "gcr.io/myrepository"
controllerManagerVersion: "v1.5.0-alpha.0.1010+892a6d7af59c0b"
```

- Assuming we have built and pushed the `hyperkube` image to the repository
+ This assumes you have built and pushed the `hyperkube` image to the repository
with the given tag in the example above.

### Getting images

- To run the federation control plane components as pods, we first need the
- images for all the components. We can either use the official release
- images or we can build them ourselves from HEAD.
+ To run the federation control plane components as pods, you first need the
+ images for all the components. You can either use the official release
+ images or you can build them yourself from HEAD.

### Using official release images

As part of every Kubernetes release, official release images are pushed to
- `gcr.io/google_containers`. To use the images in this repository, we can
- just set the container image fields in the following configs to point
- to the images in this repository. `gcr.io/google_containers/hyperkube`
- image includes the federation-apiserver and federation-controller-manager
- binaries, so we can point the corresponding configs for those components
+ `gcr.io/google_containers`. To use the images in this repository, you can
+ set the container image fields in the following configs to point to the
+ images in this repository. The `gcr.io/google_containers/hyperkube` image
+ includes the federation-apiserver and federation-controller-manager
+ binaries, so you can point the corresponding configs for those components
to the hyperkube image.

### Building and pushing images from HEAD

- To run the code from HEAD, we need to build and push our own images.
- Assuming that we have checked out the
- [Kuberntes repository](https://github.com/kubernetes/kubernetes) and
- running these commands from the root of the source directory, we can
- build the binaries using the following command:
+ To run the code from HEAD, you need to build and push your own images.
+ Check out the
+ [Kubernetes repository](https://github.com/kubernetes/kubernetes)
+ and run the following commands from the root of the source directory.
+ To build the binaries, run the following command:

```shell
$ federation/develop/develop.sh build_binaries
```

- We can build the image and push it to the repository by running:
+ To build the image and push it to the repository, run:

```shell
$ KUBE_REGISTRY="gcr.io/myrepository" federation/develop/develop.sh build_image
@@ -119,15 +119,15 @@ images from source.

### Running the federation control plane

- Once we have the images, we can turn up the federation control plane by
+ Once you have the images, you can turn up the federation control plane by
running:

```shell
$ federation/deploy/deploy.sh deploy_federation
```

This spins up the federation control components as pods managed by
- [`Deployments`](http://kubernetes.io/docs/user-guide/deployments/) on our
+ [`Deployments`](http://kubernetes.io/docs/user-guide/deployments/) on your
existing Kubernetes cluster. It also starts a
[`type: LoadBalancer`](http://kubernetes.io/docs/user-guide/services/#type-loadbalancer)
[`Service`](http://kubernetes.io/docs/user-guide/services/) for the
@@ -137,7 +137,7 @@ by a dynamically provisioned
[`PV`](http://kubernetes.io/docs/user-guide/persistent-volumes/) for
`etcd`. All these components are created in the `federation` namespace.

- We can verify that the pods are available by running the following
+ You can verify that the pods are available by running the following
command:

```shell
@@ -147,8 +147,8 @@ federation-apiserver 1 1 1 1 1m
federation-controller-manager 1 1 1 1 1m
```

- Running `deploy.sh` also creates a new record in our kubeconfig for us
- to be able to talk to federation apiserver. We can view this by running
+ Running `deploy.sh` also creates a new record in your kubeconfig so that
+ you can talk to the federation apiserver. You can view this by running
`kubectl config view`.

Note: Dynamic provisioning for persistent volume currently works only on
@@ -157,19 +157,19 @@ your needs, if required.

## Registering Kubernetes clusters with federation

- Now that we have the federation control plane up and running, we can start registering Kubernetes clusters.
+ Now that you have the federation control plane up and running, you can start registering Kubernetes clusters.

- First of all, we need to create a secret containing kubeconfig for that Kubernetes cluster, which federation control plane will use to talk to that Kubernetes cluster.
- For now, we create this secret in the host Kubernetes cluster (that hosts federation control plane). When we start supporting secrets in federation control plane, we will create this secret there.
- Suppose that our kubeconfig for Kubernetes cluster is at `/cluster1/kubeconfig`, we can run the following command to create the secret:
+ First of all, you need to create a secret containing the kubeconfig for that Kubernetes cluster, which the federation control plane will use to talk to that Kubernetes cluster.
+ For now, you can create this secret in the host Kubernetes cluster (the one that hosts the federation control plane). When federation starts supporting secrets, you will be able to create this secret there.
+ Assuming that your kubeconfig for the Kubernetes cluster is at `/cluster1/kubeconfig`, you can run the following command to create the secret:

```shell
$ kubectl create secret generic cluster1 --namespace=federation --from-file=/cluster1/kubeconfig
```

Note that the file name should be `kubeconfig` since the file name determines the name of the key in the secret.

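Because the key name is taken from the file name, the kubeconfig has to be staged under exactly that name before creating the secret. A minimal sketch of the staging step, using a hypothetical `/tmp/cluster1` directory and stand-in file content:

```shell
# Hypothetical staging step: the secret's key is taken from the file name,
# so the file must be named exactly "kubeconfig".
mkdir -p /tmp/cluster1
# Stand-in content for illustration; in practice, copy your cluster's real kubeconfig here.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/cluster1/kubeconfig
ls /tmp/cluster1
```

The `ls` at the end is just a sanity check that the file carries the expected name.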
- Now that the secret is created, we are ready to register the cluster. The YAML file for cluster will look like:
+ Now that the secret is created, you are ready to register the cluster. The YAML file for the cluster will look like:

```yaml
apiVersion: federation/v1beta1
@@ -184,25 +184,25 @@ spec:
    name: <secret-name>
```

- We need to insert the appropriate values for `<client-cidr>`, `<apiserver-address>` and `<secret-name>`.
- `<secret-name>` here is name of the secret that we just created.
+ You need to insert the appropriate values for `<client-cidr>`, `<apiserver-address>` and `<secret-name>`.
+ `<secret-name>` here is the name of the secret that you just created.
serverAddressByClientCIDRs contains the various server addresses that clients
- can use as per their CIDR. We can set the server's public IP address with CIDR
- `"0.0.0.0/0"` which all clients will match. In addition, if we want internal
- clients to use server's clusterIP, we can set that as serverAddress. The client
+ can use as per their CIDR. You can set the server's public IP address with CIDR
+ `"0.0.0.0/0"`, which all clients will match. In addition, if you want internal
+ clients to use the server's clusterIP, you can set that as the serverAddress. The client
CIDR in that case will be a CIDR that only matches IPs of pods running in that
cluster.

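For illustration, a filled-in cluster resource might look like the following. The address, CIDR and secret name below are all hypothetical placeholders, not values from this guide:

```yaml
apiVersion: federation/v1beta1
kind: Cluster
metadata:
  name: cluster1
spec:
  serverAddressByClientCIDRs:
  - clientCIDR: "0.0.0.0/0"
    # Hypothetical public address of cluster1's apiserver.
    serverAddress: "https://cluster1.example.com"
  secretRef:
    # Name of the secret created in the previous step.
    name: cluster1
```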
- Assuming our YAML file is located at `/cluster1/cluster.yaml`, we can run the following command to register this cluster:
+ Assuming your YAML file is located at `/cluster1/cluster.yaml`, you can run the following command to register this cluster:

<!-- TODO(madhusudancs): Make the kubeconfig context configurable with default set to `federation` -->
```shell
$ kubectl create -f /cluster1/cluster.yaml --context=federation-cluster

```

- By specifying `--context=federation-cluster`, we direct the request to
- federation apiserver. We can ensure that the cluster registration was
+ By specifying `--context=federation-cluster`, you direct the request to the
+ federation apiserver. You can ensure that the cluster registration was
successful by running:

```shell
@@ -213,16 +213,16 @@ cluster1 Ready 3m

## Updating KubeDNS

- Once the cluster is registered with the federation, we are all ready to use it.
- But for the cluster to be able to route federation service requests, we need to restart
+ Once the cluster is registered with the federation, you are all set to use it.
+ But for the cluster to be able to route federation service requests, you need to restart
KubeDNS and pass it a `--federations` flag which tells it about valid federation DNS hostnames.
The format of the flag is:

```
--federations=${FEDERATION_NAME}=${DNS_DOMAIN_NAME}
```

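With hypothetical values for the federation name and DNS domain, the flag would look like:

```
--federations=myfederation=federation.example.com
```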
- To update KubeDNS with federations flag, we can edit the existing kubedns replication controller to
+ To update KubeDNS with the `--federations` flag, you can edit the existing kubedns replication controller to
include that flag in the pod template spec and then delete the existing pod. The replication controller will
recreate the pod with the updated template.

@@ -253,7 +253,7 @@ And then delete it by running:
$ kubectl delete pods <pod-name> --namespace=kube-system
```

- We are now all set to start using federation.
+ You are now all set to start using federation.

## Turn down

@@ -274,29 +274,29 @@ why the new mechanism doesn't work for your case by filing an issue here -

### Getting images

- To run these as pods, we first need images for all the components. We can use
- official release images or we can build from HEAD.
+ To run these as pods, you first need images for all the components. You can use
+ official release images or you can build from HEAD.

#### Using official release images

As part of every release, images are pushed to `gcr.io/google_containers`. To use
- these images, we set env var `FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers`
+ these images, set the env var `FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers`.
This will always use the latest image.
To use the hyperkube image which includes federation-apiserver and
- federation-controller-manager from a specific release, we can set
- `FEDERATION_IMAGE_TAG`.
+ federation-controller-manager from a specific release, set the
+ `FEDERATION_IMAGE_TAG` environment variable.

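For example, to pin to a specific release's images, the environment might be set like this (the release tag below is hypothetical; pick the one you actually want):

```shell
# Hypothetical values; substitute the release tag you actually want.
export FEDERATION_PUSH_REPO_BASE=gcr.io/google_containers
export FEDERATION_IMAGE_TAG=v1.4.0
```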
#### Building and pushing images from HEAD

- To run the code from HEAD, we need to build and push our own images.
- We can build the images using the following command:
+ To run the code from HEAD, you need to build and push your own images.
+ You can build the images using the following command:

```shell
$ FEDERATION=true KUBE_RELEASE_RUN_TESTS=n make quick-release
```

- Next, we need to push these images to a registry such as Google Container Registry or Docker Hub, so that our cluster can pull them.
- If Kubernetes cluster is running on Google Compute Engine (GCE), then we can push the images to `gcr.io/<gce-project-name>`.
+ Next, you need to push these images to a registry such as Google Container Registry or Docker Hub, so that your cluster can pull them.
+ If your Kubernetes cluster is running on Google Compute Engine (GCE), then you can push the images to `gcr.io/<gce-project-name>`.
The command to push the images will look like:

```shell
@@ -305,7 +305,7 @@ $ FEDERATION=true FEDERATION_PUSH_REPO_BASE=gcr.io/<gce-project-name> ./build/pu

### Running the federation control plane

- Once we have the images, we can run these as pods on our existing kubernetes cluster.
+ Once you have the images, you can run these as pods on your existing Kubernetes cluster.
The command to run these pods on an existing GCE cluster will look like:

```shell
@@ -320,14 +320,14 @@ This is used to resolve DNS requests for federation services. The service
controller keeps DNS records with the provider updated as services/pods are
updated in underlying Kubernetes clusters.

- `FEDERATION_NAME` is a name we can choose for our federation. This is the name that will appear in DNS routes.
+ `FEDERATION_NAME` is a name you can choose for your federation. This is the name that will appear in DNS routes.

- `DNS_ZONE_NAME` is the domain to be used for DNS records. This is a domain that we
+ `DNS_ZONE_NAME` is the domain to be used for DNS records. This is a domain that you
need to buy and then configure such that DNS queries for that domain are
routed to the appropriate provider as per `FEDERATION_DNS_PROVIDER`.

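Putting those together, a hypothetical environment for the turn-up script might look like this (both values are placeholders; substitute your own federation name and domain):

```shell
# Hypothetical federation name and DNS zone; substitute your own domain.
export FEDERATION_NAME=myfederation
export DNS_ZONE_NAME=federation.example.com
```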
Running that command creates a namespace `federation` and creates 2 deployments: `federation-apiserver` and `federation-controller-manager`.
- We can verify that the pods are available by running the following command:
+ You can verify that the pods are available by running the following command:

```shell
$ kubectl get deployments --namespace=federation
@@ -336,8 +336,8 @@ federation-apiserver 1 1 1 1 1m
federation-controller-manager 1 1 1 1 1m
```

- Running `federation-up.sh` also creates a new record in our kubeconfig for us
- to be able to talk to federation apiserver. We can view this by running
+ Running `federation-up.sh` also creates a new record in your kubeconfig so that
+ you can talk to the federation apiserver. You can view this by running
`kubectl config view`.

Note: `federation-up.sh` creates the federation-apiserver pod with an etcd