Managing Kubernetes Applications with HashiCorp Terraform

HashiCorp Terraform is an open source tool that enables users to provision any infrastructure using a consistent workflow. While Terraform can manage infrastructure for both public and private cloud services, it can also manage external services like GitHub, Nomad, or Kubernetes pods. This post highlights the new Terraform Kubernetes provider, which enables operators to manage the lifecycle of Kubernetes resources using declarative infrastructure as code.

Terraform provisions infrastructure and infrastructure resources through an extensible ecosystem of providers (plugins). In addition to explaining the benefits of using Terraform to manage Kubernetes resources versus the Kubernetes CLI, this post also walks through using the new Kubernetes provider to interact with Kubernetes resources (pods, replication controllers, and services), enabling operators to control the lifecycle of Kubernetes resources using infrastructure as code.

Why Terraform?

Q: Why would I use Terraform to manage Kubernetes resources as infrastructure as code?

Terraform uses the same declarative syntax to provision both the underlying infrastructure (compute, networking, and storage) and the scheduling (application) layer. Using graph theory, Terraform models the relationships between all dependencies in your infrastructure automatically. This same graph enables Terraform to automatically detect drift as resources (like compute instances or Kubernetes pods) change over time. This drift is presented to the user for confirmation as part of the Terraform dry-run planning phase.

Terraform provides full lifecycle management of Kubernetes resources including creation and deletion of pods, replication controllers, and services.

Because Terraform understands the relationships between resources, it has an inherent understanding of the order of operations and failure conditions for creating, updating, and deleting resources. For example, if a persistent volume claim (PVC) requires space from a particular persistent volume (PV), Terraform automatically knows to create the PV before the PVC. If the PV fails to create, Terraform will not attempt to create the PVC, since Terraform knows the creation will fail.

Unlike the kubectl CLI, Terraform will wait for services to become ready before creating dependent resources. This is useful when you want to guarantee state following the command's completion. As a concrete example of this behavior, Terraform will wait until a service is provisioned so it can add the service's IP to a load balancer. No manual processes necessary!

Getting started with the Kubernetes provider

This post assumes you already have a Kubernetes cluster up and running, and that the cluster is accessible from the place where Terraform is running. Terraform can provision a Kubernetes cluster, but that is not specifically discussed in this post. The easiest way to configure the Kubernetes provider is to create a configuration file at ~/.kube/config – Terraform will automatically load that configuration during its run:

# main.tf
provider "kubernetes" {}

When it is not feasible to create a configuration file, you can configure the provider directly in the configuration or via environment variables. This is useful in CI systems or ephemeral environments that change frequently.
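As a sketch of that explicit configuration, the provider accepts connection details such as `host`, `username`, and `password` directly. The values below are placeholders for illustration, not real endpoints or credentials:

```hcl
# main.tf
# Explicit provider configuration - useful in CI or ephemeral environments.
# All values below are hypothetical examples.
provider "kubernetes" {
  host     = "https://104.196.242.174" # example API server address
  username = "ClusterMaster"           # example username
  password = "MindTheGap"              # example password
}
```

Certificate-based authentication is also supported via attributes such as `client_certificate` and `cluster_ca_certificate`; consult the provider documentation for the full list.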

After specifying the provider, initialize Terraform. This will download and install the latest version of the Terraform Kubernetes provider.

$ terraform init

Initializing provider plugins...
- Downloading plugin for provider "kubernetes"...

Terraform has been successfully initialized!

Scheduling a Simple Application

At the core of a Kubernetes application is the pod. A pod consists of one or more containers that are scheduled onto cluster nodes based on available CPU and memory.

Next, use Terraform to create a pod with a single container running http-echo, exposing port 80 to the user through a load balancer. By adding labels, Kubernetes can discover all pods (instances) and route traffic to the exposed port automatically.

resource "kubernetes_pod" "echo" {
  metadata {
    name = "echo-example"

    labels {
      App = "echo"
    }
  }

  spec {
    container {
      image = "hashicorp/http-echo:0.2.1"
      name  = "example2"
      args  = ["-listen=:80", "-text='Hello World'"]

      port {
        container_port = 80
      }
    }
  }
}

The above is an example and does not represent best practices. In production scenarios, you would run more than one instance of your application for high availability.
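For higher availability, a replication controller can run several identical copies of the pod. The following is a minimal sketch using the provider's `kubernetes_replication_controller` resource; the exact schema may vary between provider versions, so treat this as illustrative rather than definitive:

```hcl
# Runs three replicas of the http-echo pod; the selector must match
# the labels on the pod template so the controller can manage them.
resource "kubernetes_replication_controller" "echo" {
  metadata {
    name = "echo-example"
  }

  spec {
    replicas = 3 # number of pod instances to keep running

    selector {
      App = "echo"
    }

    template {
      container {
        image = "hashicorp/http-echo:0.2.1"
        name  = "example2"
        args  = ["-listen=:80", "-text='Hello World'"]

        port {
          container_port = 80
        }
      }
    }
  }
}
```

If a pod dies or a node fails, the controller schedules a replacement automatically, which the single-pod example above cannot do.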

To expose the pod to end users, provision a service. A service can provision a load balancer on supported cloud providers, and it manages the relationship between the pods and that load balancer as new pods are launched.

resource "kubernetes_service" "echo" {
  metadata {
    name = "echo-example"
  }

  spec {
    selector {
      App = "${kubernetes_pod.echo.metadata.0.labels.App}"
    }

    port {
      port        = 80
      target_port = 80
    }

    type = "LoadBalancer"
  }
}

output "lb_ip" {
  value = "${kubernetes_service.echo.load_balancer_ingress.0.ip}"
}

In addition to specifying the service, this Terraform configuration also specifies an output. The output is displayed at the end of terraform apply and prints the IP of the load balancer, making it easily accessible to an operator (human) or to any tools/scripts that need it.

The plan provides an overview of the actions Terraform plans to take. In this case, you will see two resources (one pod + one service) in the output. As the number of infrastructure and application resources expands, the terraform plan command becomes useful for understanding impact and rollout effect during updates and changes. Run terraform plan now:

$ terraform plan

# ...

  + kubernetes_pod.echo
      metadata.#:                                  "1"
      metadata.0.generation:                       "<computed>"
      metadata.0.labels.%:                         "1"
      metadata.0.labels.App:                       "echo"
      metadata.0.name:                             "echo-example"
      metadata.0.namespace:                        "default"
      spec.#:                                      "1"
      spec.0.container.#:                          "1"
      spec.0.container.0.args.#:                   "2"
      spec.0.container.0.args.0:                   "-listen=:80"
      spec.0.container.0.args.1:                   "-text='Hello World'"
      spec.0.container.0.image:                    "hashicorp/http-echo:0.2.1"
      spec.0.container.0.image_pull_policy:        "<computed>"
      spec.0.container.0.name:                     "example2"
      spec.0.container.0.port.#:                   "1"
      spec.0.container.0.port.0.container_port:    "80"
...

  + kubernetes_service.echo
      load_balancer_ingress.#:     "<computed>"
      metadata.#:                  "1"
      metadata.0.generation:       "<computed>"
      metadata.0.name:             "echo-example"
      metadata.0.namespace:        "default"
      metadata.0.resource_version: "<computed>"
      metadata.0.self_link:        "<computed>"
      metadata.0.uid:              "<computed>"
      spec.#:                      "1"
      spec.0.cluster_ip:           "<computed>"
      spec.0.port.#:               "1"
      spec.0.port.0.node_port:     "<computed>"
      spec.0.port.0.port:          "80"
      spec.0.port.0.protocol:      "TCP"
      spec.0.port.0.target_port:   "80"
      spec.0.selector.%:           "1"
      spec.0.selector.App:         "echo"
      spec.0.session_affinity:     "None"
      spec.0.type:                 "LoadBalancer"

Plan: 2 to add, 0 to change, 0 to destroy.

The terraform plan command never modifies resources; it is purely a dry run. To apply these changes, run terraform apply. This command creates the resources (via the API) and handles ordering, failures, and conditionals. Additionally, terraform apply blocks until all resources have finished provisioning. Run terraform apply now:

$ terraform apply

kubernetes_pod.echo: Creating...
...
kubernetes_pod.echo: Creation complete (ID: default/echo-example)
kubernetes_service.echo: Creating...
...
kubernetes_service.echo: Still creating... (10s elapsed)
kubernetes_service.echo: Still creating... (20s elapsed)
kubernetes_service.echo: Still creating... (30s elapsed)
kubernetes_service.echo: Still creating... (40s elapsed)
kubernetes_service.echo: Creation complete (ID: default/echo-example)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

# ...

Outputs:

lb_ip = 35.197.9.247

To verify the application is running, use curl from your terminal:

$ curl -s $(terraform output lb_ip)

If everything worked as expected, you will see the text Hello World.

The Kubernetes UI provides another way to verify that both the Pod and the Service are running once they are scheduled.

Updating the Application

Over time, the need to deploy a new version of your application will come up. The easiest way to perform the upgrade is to change the image field in the configuration accordingly.

resource "kubernetes_pod" "example" {
  # ...

  spec {
    container {
      image = "hashicorp/http-echo:0.2.3"

      # ...
    }
  }
}

To verify the changes Terraform will make, run terraform plan and inspect the output. This will also verify that no one else on the team modified the resource created earlier.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...

kubernetes_pod.echo: Refreshing state... (ID: default/echo-example)
kubernetes_service.echo: Refreshing state... (ID: default/echo-example)

...

  ~ kubernetes_pod.echo
      spec.0.container.0.image: "hashicorp/http-echo:0.2.1" =&gt; "hashicorp/http-echo:0.2.3"

Plan: 0 to add, 1 to change, 0 to destroy.

Then apply the changes:

$ terraform apply

kubernetes_pod.echo: Refreshing state... (ID: default/echo-example)
kubernetes_service.echo: Refreshing state... (ID: default/echo-example)
kubernetes_pod.echo: Modifying... (ID: default/echo-example)
  spec.0.container.0.image: "hashicorp/http-echo:0.2.1" => "hashicorp/http-echo:0.2.3"
kubernetes_pod.echo: Modifications complete (ID: default/echo-example)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Upon completion, the Pod starts a container from the new image and kills the old one.

Conclusion

Terraform provides organizations with infrastructure as code, cloud platform management, and the ability to create modules for self-service infrastructure. This post showed one example of how Terraform can manage the resources necessary to run applications on a Kubernetes cluster and schedule the underlying resources.

For more information, check out the complete guide for Managing Kubernetes with Terraform.

For more information on HashiCorp Terraform, visit hashicorp.com/terraform.html.
