kubernetes (k8s), Main

k8s – Concepts & Components (from kubernetes.io)

Master Components

Master components provide the cluster’s control plane. They make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events (for example, starting a new pod when a replication controller’s ‘replicas’ field is unsatisfied).

Master components can be run on any machine in the cluster. However, for simplicity, setup scripts typically start all master components on the same machine and do not run user containers on that machine. See Building High-Availability Clusters for an example multi-master-VM setup.

kube-apiserver

Component on the master that exposes the Kubernetes API. It is the front-end for the Kubernetes control plane.

It is designed to scale horizontally – that is, it scales by deploying more instances. See Building High-Availability Clusters.
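
As an illustration (assuming kubectl is installed and pointed at the cluster), every kubectl call is ultimately a request to the kube-apiserver, and the same API can be reached directly over REST:

kubectl get pods --all-namespaces                           # kubectl turns this into an API request
kubectl proxy --port=8001 &                                 # authenticated local tunnel to the apiserver
curl http://localhost:8001/api/v1/namespaces/default/pods   # the same data over the raw REST API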

etcd

Consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.

Always have a backup plan for etcd’s data for your Kubernetes cluster. For in-depth information on etcd, see etcd documentation.
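
As a sketch, with the etcd v3 API a backup can be taken with etcdctl’s snapshot command; the endpoint and certificate paths below are assumptions based on kubeadm defaults, so adjust them to your deployment:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /var/backups/etcd-snapshot.db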

kube-scheduler

Component on the master that watches newly created pods that have no node assigned, and selects a node for them to run on.

Factors taken into account for scheduling decisions include individual and collective resource requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, data locality, inter-workload interference and deadlines.
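
For instance, resource requests and a nodeSelector directly constrain where the scheduler may place a pod. A minimal sketch (the disktype=ssd label is a hypothetical example):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd            # only nodes carrying this label are considered
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"          # the scheduler only picks nodes with this much free CPU
        memory: "64Mi"
EOF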

kube-controller-manager

Component on the master that runs controllers.

Logically, each controller is a separate process, but to reduce complexity, they are all compiled into a single binary and run in a single process.

These controllers include:

  • Node Controller: Responsible for noticing and responding when nodes go down.
  • Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods); see the sketch after this list.
  • Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
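
You can watch the Endpoints controller at work: as pods matching a Service’s selector come and go, their addresses appear in and disappear from the corresponding Endpoints object. A sketch, assuming a Service named my-service already exists:

kubectl get endpoints my-service      # lists the pod IP:port pairs the controller has joined to the Service
kubectl describe service my-service   # the Endpoints line shows the same addresses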

cloud-controller-manager

cloud-controller-manager runs controllers that interact with the underlying cloud providers. The cloud-controller-manager binary is an alpha feature introduced in Kubernetes release 1.6.

cloud-controller-manager runs cloud-provider-specific controller loops only. These loops must be disabled in the kube-controller-manager, which is done by setting its --cloud-provider flag to external when starting it.
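
A minimal sketch of that flag (all other flags omitted):

kube-controller-manager --cloud-provider=external   # leaves the cloud-specific loops to cloud-controller-manager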

cloud-controller-manager allows the cloud vendors’ code and the Kubernetes core to evolve independently of each other. In prior releases, the core Kubernetes code was dependent upon cloud-provider-specific code for functionality. In future releases, code specific to cloud vendors should be maintained by the cloud vendors themselves and linked to cloud-controller-manager while running Kubernetes.

The following controllers have cloud provider dependencies:

  • Node Controller: For checking the cloud provider to determine if a node has been deleted in the cloud after it stops responding
  • Route Controller: For setting up routes in the underlying cloud infrastructure
  • Service Controller: For creating, updating and deleting cloud provider load balancers
  • Volume Controller: For creating, attaching, and mounting volumes, and interacting with the cloud provider to orchestrate volumes

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on each node in the cluster. It makes sure that containers are running in a pod.

The kubelet takes a set of PodSpecs that are provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet doesn’t manage containers which were not created by Kubernetes.
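
One of those mechanisms is a static pod: a manifest dropped into the kubelet’s manifest directory is run by the kubelet directly, without going through the apiserver. A sketch, assuming the kubeadm default path /etc/kubernetes/manifests:

cat <<EOF > /etc/kubernetes/manifests/static-web.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx
EOF
# the kubelet picks the file up and starts the pod; removing the file stops it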

kube-proxy

kube-proxy enables the Kubernetes service abstraction by maintaining network rules on the host and performing connection forwarding.
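
In kube-proxy’s default iptables mode those rules are visible on any node; each Service gets chains that forward its cluster IP to pod endpoints. A read-only peek (chain names assume iptables mode):

iptables -t nat -L KUBE-SERVICES -n | head   # one dispatch rule per Service cluster IP
iptables -t nat -L KUBE-NODEPORTS -n         # forwarding rules for NodePort Services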

Container Runtime

The container runtime is the software that is responsible for running containers. Kubernetes supports two runtimes: Docker and rkt.

Addons

Addons are pods and services that implement cluster features. The pods may be managed by Deployments, ReplicationControllers, and so on. Namespaced addon objects are created in the kube-system namespace.

Selected addons are described below; for an extended list of available addons, please see Addons.

DNS

While the other addons are not strictly required, all Kubernetes clusters should have cluster DNS, as many examples rely on it.

Cluster DNS is a DNS server, in addition to the other DNS server(s) in your environment, which serves DNS records for Kubernetes services.

Containers started by Kubernetes automatically include this DNS server in their DNS searches.
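
This is easy to verify from inside a pod (assuming the container image ships nslookup):

kubectl exec <pod-name> -- cat /etc/resolv.conf          # nameserver points at the cluster DNS Service
kubectl exec <pod-name> -- nslookup kubernetes.default   # resolves the apiserver’s Service via cluster DNS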

Web UI (Dashboard)

Dashboard is a general purpose, web-based UI for Kubernetes clusters. It allows users to manage and troubleshoot applications running in the cluster, as well as the cluster itself.
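
A sketch of a typical deployment (the manifest URL below matches the Dashboard project’s recommended setup at the time of writing and may move; check the kubernetes/dashboard repository for the current one):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy    # the Dashboard is then reachable through the apiserver proxy on localhost:8001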

Container Resource Monitoring

Container Resource Monitoring records generic time-series metrics about containers in a central database, and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism is responsible for saving container logs to a central log store with a search/browsing interface.


K8s – Installation & Configuration

Hello guys,

I know it is quite difficult to install Kubernetes in an environment that sits behind a corporate proxy, so I decided to take the pain and do exactly that. I would like to share my steps.

For Both Master and Worker Nodes :-

vi .bashrc

# Set proxy
function setproxy()
{
    export {http,https,ftp}_proxy="http://<proxy_ip>:<port>"
    export no_proxy="localhost,10.96.0.0/12,*.<company_domain_name>,<internal_ip>"
}

# Unset proxy
function unsetproxy()
{
    unset {http,https,ftp}_proxy
}

# Check proxy
function checkproxy()
{
    env | grep proxy
}
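
After saving, reload the file so the functions are available in the current shell:

source ~/.bashrc
setproxy      # exports http_proxy, https_proxy, ftp_proxy and no_proxy
checkproxy    # should print the variables just set; unsetproxy clears them again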

vi /etc/yum.conf

# yum reads a single proxy key, used for all repositories
proxy=http://<proxy_ip>:<port>

vi /etc/hosts

<ip1-master>  kubernetes-1

<ip2-worker>  kubernetes-2

<ip3-worker>  kubernetes-3

 

mkdir -p /etc/systemd/system/docker.service.d/

 

vi /etc/systemd/system/docker.service.d/http-proxy.conf

 

[Service]

Environment=HTTP_PROXY=http://<proxy_ip>:<port>/

Environment=HTTPS_PROXY=https://<proxy_ip>:<port>/

Environment=NO_PROXY=<ip1-master>,<ip2-worker>,<ip3-worker>
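
Docker only reads this drop-in at startup, so reload systemd and restart it, then verify the variables were picked up:

systemctl daemon-reload
systemctl restart docker
systemctl show --property=Environment docker   # should list the three proxy variables
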
cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

EOF

 

setenforce 0
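
setenforce 0 only lasts until the next reboot; to keep SELinux permissive permanently, update its config file as well:

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config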

 

yum install -y kubelet kubeadm kubectl

systemctl enable kubelet && systemctl start kubelet

 

sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

 

systemctl daemon-reload

systemctl restart kubelet
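
The cgroup driver configured above must match the one Docker actually uses; worth double-checking before running kubeadm:

docker info 2>/dev/null | grep -i "cgroup driver"   # should print: Cgroup Driver: cgroupfs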

 

export no_proxy="localhost,10.96.0.0/12,*.<company domain>,<ip1-master>,<ip2-worker>,<ip3-worker>"

 

Master Node :-

kubeadm init --pod-network-cidr=192.168.0.0/16

# admin.conf is written by kubeadm init; export it so kubectl can talk to the cluster
export KUBECONFIG=/etc/kubernetes/admin.conf

# Calico is recommended for amd64; its v3.0 kubeadm manifest expects the 192.168.0.0/16 pod CIDR
# passed to kubeadm init above. Flannel is arguably better, but needs the pod CIDR to be 10.244.0.0/16 instead.
kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
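
kubeadm init prints the exact join command for the workers at the end of its output. If it gets lost, it can be regenerated on the master (--print-join-command is available in recent kubeadm releases):

kubeadm token create --print-join-command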

 

Worker Node :-

kubeadm join --token <token received from master node> <master ip>:6443 --discovery-token-ca-cert-hash sha256:<master-hash>

Master Node :-

Check on the master that all nodes have joined:

kubectl get nodes
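
The output should look roughly like this (node names from the /etc/hosts entries above; the version column is illustrative):

NAME           STATUS    ROLES     AGE       VERSION
kubernetes-1   Ready     master    12m       v1.10.x
kubernetes-2   Ready     <none>    5m        v1.10.x
kubernetes-3   Ready     <none>    4m        v1.10.x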
