原文:
annas-archive.org/md5/630da8346cd37d456c6937b0e86b9f60

译者:飞龙
前言
在我开始着手编写《DevOps 2.3 工具包:Kubernetes》之后不久,我就意识到一本书只能触及表面。Kubernetes 是一个庞大的主题,没有哪本书能够涵盖它所有的核心组件;如果再算上社区项目,范围就更广了。接着,我们还需要考虑托管服务商,以及设置和管理 Kubernetes 的不同方式,这势必会引导我们去了解 OpenShift、Rancher 和 Docker EE 等第三方解决方案,而这些还只是其中一部分。事情并不会就此结束:我们还需要探讨与网络和存储相关的其他社区及第三方扩展,更别提持续交付和持续部署这样的流程了。所有这些内容无法在一本书中深入探讨,因此《DevOps 2.3 工具包:Kubernetes》最终成为了一本 Kubernetes 入门书,可以作为进一步探索其他内容的基础。
当我发布了《DevOps 2.3 工具包:Kubernetes》的最后一章时,我就开始着手准备下一本书的内容,许多想法和尝试从中涌现出来。我花了一段时间,新书的主题和形式才逐渐成型。在与前一本书的读者多次讨论之后,我们决定深入探讨 Kubernetes 集群中的持续交付与持续部署过程。你现在正在阅读的这本书,其大致范围就这样诞生了。
第一章:概述
就像我写的其他书一样,这本书并没有固定的范围。我没有先列好目录,也没有为每一章预先写好摘要来界定范围,我不做这些事情。这里只设定了一个高层次的目标:探讨 Kubernetes 集群内部的持续交付和持续部署。不过,我确实设定了几条指导原则。
第一个指导原则是所有示例将在所有主要的 Kubernetes 平台上进行测试。嗯,这可能有些不切实际。我知道,任何提到“所有”和“Kubernetes”的句子都可能是错误的。新平台像雨后春笋般层出不穷。不过,我肯定能做到的是选择一些最常用的平台。
Minikube和Docker for Mac 或 Windows无疑应当在内,适合那些喜欢在本地“玩”Docker 的人。
AWS 是最大的托管服务提供商,因此**Kubernetes Operations(kops)**必须包括在内。
由于只覆盖非托管的云环境并不合理,我还必须纳入托管的 Kubernetes 集群。**Google Kubernetes Engine (GKE)** 是显而易见的选择:它是最稳定、功能最丰富的托管 Kubernetes 解决方案。把 GKE 纳入进来,也意味着 **Azure 容器服务(AKS)** 和 **Amazon 弹性容器服务(EKS)** 应当包括在内,这样我们就覆盖了提供托管 Kubernetes 的三大云供应商。不过,尽管 AKS 已经可用,但截至 2018 年 6 月,它仍然不够稳定,而且缺少许多功能。因此,我不得不把这三家缩减为 GKE 和 EKS 这两个托管 Kubernetes 的代表。
最后,可能还需要包括一个本地部署解决方案。由于OpenShift在这方面表现出色,选择相对容易。
总的来说,我决定在本地使用 minikube 和 Docker for Mac 进行测试,将 AWS 与 kops 作为云中集群的代表,使用 GKE 来测试托管的 Kubernetes 集群,以及使用 OpenShift(搭配 minishift)作为潜在的本地解决方案。光是这一点就构成了一个真正的挑战,可能超出我的能力范围。不过,确保所有示例都能在这些平台和解决方案上运行,应该会提供一些有价值的见解。
你们中的一些人已经选定了自己使用的 Kubernetes 发行版,其他人可能仍在犹豫要采用哪一个。虽然比较不同的 Kubernetes 平台并不是本书的主要内容,但我会在必要时尽力解释它们之间的差异。
把这些指导原则总结起来:本书探讨如何使用 Jenkins 在 Kubernetes 中实现持续交付和持续部署,所有示例都将在 minikube、Docker for Mac(或 Windows)、AWS 与 kops、GKE、OpenShift 与 minishift 以及 EKS 上进行测试。
当我写完上一段时,我意识到我正在重复过去的错误。我从看似合理的范围开始,最终却变成了更大更长的内容。我能否遵循所有这些指南?说实话,我不知道。我会尽力而为。
我本应该按照“最佳实践”在最后写概述,但我没有这么做。相反,您现在读到的是关于这本书的计划,而不是最终的结果。这不是概述。您可以把它当作日记的第一页。故事的结局仍然未知。
最终,您可能会遇到困境,需要帮助。或者您可能想写个评论或对书的内容发表意见。请加入DevOps20 Slack 频道,发布您的想法,提出问题,或者参与讨论。如果您更喜欢一对一的沟通,您可以通过 Slack 给我发私信,或者发送电子邮件到 viktor@farcic.com。我写的所有书对我来说都非常珍贵,我希望您能有愉快的阅读体验。体验的一部分就是能够联系到我。别害羞。
请注意,这本书和之前的书籍一样,都是自费出版的。我相信没有中介介入作者和读者之间是最好的方式。这让我能更快地写作,更频繁地更新书籍,并且能与您进行更直接的沟通。您的反馈是这一过程的一部分。无论您是在书中的章节还未完成时购买,还是所有章节都已写完,想法是它永远不会真正完成。随着时间的推移,它将需要更新,以便与技术或流程的变化保持一致。在可能的情况下,我会尽量保持它的更新,并在适当的时候发布更新。最终,情况可能会变化得如此之大,以至于更新不再是一个好的选择,那将是需要一本全新书籍的信号。只要我继续得到您的支持,我将一直写下去。
第二章:读者群体
本书探讨了如何将持续部署应用于 Kubernetes 集群。它涵盖了多种 Kubernetes 平台,并提供了如何在一些最常用的 CI/CD 工具上开发流水线的指导。
本书假设你已经熟悉 Kubernetes。假设你已经熟练掌握了 Deployments、ReplicaSets、Pods、Ingress、Services、PersistentVolumes、PersistentVolumeClaims、Namespaces 等一些基本概念和功能。本书不打算涵盖这些基础内容,至少不会全部讲解。本书假设读者具备一定的 Kubernetes 知识和实践经验。如果你不具备这些前提,接下来的内容可能会让你感到困惑且较为高级。请先阅读 The DevOps 2.3 Toolkit: Kubernetes,或者参考 Kubernetes 文档。完成后再回来,确保你至少理解了 Kubernetes 的基本概念和资源类型。
第三章:关于作者
Viktor Farcic 是 CloudBees 的高级顾问、Docker Captains 小组的成员,同时也是一名图书作者。
他使用过多种编程语言,从 Pascal(没错,他很老)开始,接着是 Basic(在它有了 Visual 前缀之前),ASP(在它有了.Net 后缀之前),C,C++,Perl,Python,ASP.Net,Visual Basic,C#,JavaScript,Java,Scala 等。他从未使用过 Fortran。他现在最喜欢的是 Go。
他的主要兴趣是微服务、持续部署和测试驱动开发(TDD)。
他经常在社区聚会和会议上发言。
他写了以下书籍:The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices,The DevOps 2.1 Toolkit: Docker Swarm: Building, testing, deploying, and monitoring services inside Docker Swarm clusters,The DevOps 2.2 Toolkit: Self-Sufficient Docker Clusters: Building Self-Adaptive And Self-Healing Docker Clusters,The DevOps 2.3 Toolkit: Kubernetes: Deploying and managing highly-available and fault-tolerant applications at scale,以及 Test-Driven Java Development。
他的随想和教程可以在他的博客 TechnologyConversations.com 上找到。
第四章:献词
献给萨拉,这个世界上真正重要的人。
第五章:先决条件
每一章都会假设你已经有一个正在运行的 Kubernetes 集群。无论它是一个本地运行的单节点集群,还是一个完全可操作的类似生产环境的集群,都无关紧要。重要的是你至少有一个集群。
我们不会详细讨论如何创建 Kubernetes 集群。我相信你已经知道如何操作,并且在你的笔记本电脑上安装了 kubectl。如果不是这种情况,你可能需要阅读附录 A: 安装 kubectl 并使用 minikube 创建集群。虽然 minikube 非常适合运行本地单节点 Kubernetes 集群,但你可能希望在一个更接近生产环境的集群中尝试一些想法。我希望你已经在 AWS、GKE、DigitalOcean、本地或其他地方运行了一个“真正的” Kubernetes 集群。如果没有,而且你不知道如何创建一个集群,请阅读 附录 B: 使用 Kubernetes Operations (kops)。它会提供你准备、创建和销毁集群所需的足够信息。
尽管 附录 A 和 附录 B 解释了如何在本地和 AWS 上创建 Kubernetes 集群,你不必仅仅局限于在本地使用 minikube 或在 AWS 上使用 kops。我尽力提供了关于一些常用 Kubernetes 集群变种的指导。
在大多数情况下,相同的示例和命令将在所有测试过的组合中有效。当无法直接适用时,你将看到说明,解释如何在你偏好的 Kubernetes 和托管环境中完成相同的操作。即使你使用的是其他平台,你也不应该有问题调整命令和规格以适应你的平台。
每一章都会包含一小段要求,说明你的 Kubernetes 集群需要满足哪些条件。如果你对某些要求不确定,我准备了一些 Gists,列出了我用来创建集群的命令。由于每一章可能需要不同的集群组件和规模,用于设置集群的 Gists 可能会有所不同。请将它们作为指南,而不必严格按照这些命令执行。毕竟,本书假设你已经具备一些 Kubernetes 知识。如果你从未创建过 Kubernetes 集群,却又声称自己不是 Kubernetes 新手,那就有些说不过去了。
简而言之,先决条件是具备 Kubernetes 的实际操作经验,以及至少一个 Kubernetes 集群。
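如果想快速确认手头的环境是否满足这些前提,可以参考下面几条命令。这只是一个粗略的示意,具体输出会因平台而异:

```bash
# 粗略检查:确认 kubectl 可用,且指向一个正常运行的集群(输出因平台而异)
kubectl version --short      # 客户端与服务端的版本
kubectl cluster-info         # API Server 等核心组件的地址
kubectl get nodes            # 至少应有一个处于 Ready 状态的节点
kubectl get storageclass     # 后面的章节会用到默认的 StorageClass
```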
第六章:《老年人的低语》
我花费了大量时间帮助公司改善他们的系统。我工作中最具挑战性的一部分是,在一次合作结束后回到家,知道下次再次拜访同一家公司时,我会发现没有任何实质性的改进。我不能说这完全不是我的错,确实是。可能我在做的事情上并不够好,或者我不擅长传达正确的信息,也许我的建议是错误的。造成这些失败的原因有很多,我承认它们大部分可能是我的错。然而,我还是无法摆脱一种感觉,觉得我的失败是由其他原因引起的。我认为根本原因在于错误的期望。
人们想要进步,这是我们天性的一部分。或者至少,大多数人是如此。我们成为工程师是因为我们充满好奇心。我们喜欢玩新的玩具,喜欢探索新的可能性。然而,随着我们在公司工作时间的增长,我们变得越来越自满。我们学到一些东西,然后就停止学习。我们的重点转向了爬升公司阶梯。时间一长,我们越来越注重捍卫自己的地位,而这通常意味着维持现状。
我们在某个领域变得非常专业,这种专业知识给我们带来了荣耀,或许还带来了一两次晋升。从那以后,我们就一直靠着这份荣耀过活。看我,我是 DB2 专家;没错,是我搭建起了 VMware 虚拟化;是我把 Spring 的好处带进了我们的 Java 开发。 一旦发生这种情况,我们往往会想方设法让这些好处永远保持下去。我们不会转向 NoSQL,因为那意味着我的 DB2 专长不再那么值钱。我们不会转向云,因为我是公司 VMware 环境背后的高手。我们不会采用 Go,因为我擅长的是用 Java 编程。
这些声音很重要,因为它们是由资深人士发出的。每个人都需要听他们说话,尽管这些声音背后的真正动机往往是自私的。它们并非基于实际的知识,而是基于反复的经验。拥有二十年 DB2 经验并不等于二十年的进步,而是将同样的经验重复了二十次。然而,二十年的经验是有分量的。人们听你说话,并不是因为他们信任你,而是因为你资历深厚,管理层相信你有能力做出决策。
把老一辈的声音、管理层对未知的恐惧,以及他们对短期利益的追求结合在一起,结果往往就是维持现状。这种方式已经行得通很多年了,为什么要改变呢?为什么要听一个初级开发人员告诉我该做什么? 即使改变的倡议有 Google、Amazon 和 Netflix(仅举几例)这类巨头的经验作为支撑,你得到的回应也很可能是:“我们不一样。”“这在这里行不通。”“我也想那么做,但一些我自己都不完全理解的规定让我无法做出任何改变。”
尽管如此,迟早会有改变的指令传来。你的 CTO 可能去了 Gartner 会议,在那里他被告知要转向微服务。太多人谈论敏捷,管理层不可能忽视。DevOps 是个大趋势,所以我们也需要实施它。Kubernetes 无处不在,所以我们很快就会开始做一个 PoC。
当这些事情真的发生时,当一个改变获得批准时,你可能会欣喜若狂。这是你的时刻。这时你会开始做一些令人兴奋的事情。往往正是这个时刻,我会接到电话。“我们想做这个和那个,你能帮忙吗?” 我通常(并非总是)会答应。这就是我的工作。尽管如此,我知道我的参与不会带来实质性的改进。我猜希望总是最后死去。
我为什么这么悲观?为什么我认为改进并不会带来切实的好处?答案在于所需改变的范围。
几乎每个工具都是特定流程的结果。另一方面,一个流程是某种文化的产物。如果在没有文化改变的情况下采用一个流程,那就是浪费时间。如果在没有理解背后流程的情况下采用工具,那也是徒劳的努力,结果只会是浪费时间,并可能导致可观的许可费用。在一些罕见的情况下,企业确实选择接受改变三者(文化、流程和工具)的需要。他们做出了决定,有时他们甚至开始朝着正确的方向迈进。这些是珍贵的案例,值得珍惜。但它们也可能失败。过了一段时间,通常是几个月后,我们会意识到这些改变的范围。只有勇敢的人才能生存下来,只有那些坚定的人才能坚持到底。
那些选择继续前进、真正改变了文化、流程和工具的人,会发现这些改变与他们多年来开发的应用程序并不兼容。容器几乎能运行任何东西,但只有用在微服务(而不是单体应用)上时,才能真正大放异彩。测试驱动开发能提高信心、质量和速度,但前提是应用程序在设计上必须具备可测试性。零停机部署并不是神话,它们确实有效,但前提是我们的应用是云原生的,至少要遵循十二因素(twelve-factor)等标准,诸如此类。
这不仅仅是关于工具、流程和文化,还包括摆脱你多年来积累的技术债务。说到债务,我不一定是指你开始时做错了什么,而是时间把一些曾经很棒的东西变成了可怕的怪物。你是否花费了百分之五十的时间进行重构?如果没有,你就在积累技术债务。这是不可避免的。
当面对所有这些挑战时,放弃是意料之中的结果。当隧道尽头看不到一丝光亮时,认输是人之常情。我不会责怪你,我感同身受。你之所以没有前进,是因为障碍太大了。但你必须站起来,因为别无选择。你会继续前行,你会取得进步。这个过程会很痛苦,但另一条路只有慢慢死去,让竞争对手看着你奄奄一息的躯壳。
你已经走到了这一步,我只能假设有两种可能的解释。你是那些通过阅读技术书籍来逃避现实的人之一,或者你至少应用了我们迄今所讨论的一些内容。我希望是后者。如果是这样,你就不会成为“我又失败了”的又一个例子。我为此感谢你。这让我感觉好一些。
如果你真正应用了这本书的教训,而不是假装,那你确实在做一件了不起的事情。持续交付(CD)是没法假装的:只要流水线的所有阶段都通过,你的每一次提交都可以立即投入生产,而是否部署到生产环境是基于业务或市场需求的决策,不是技术决策。你甚至可以更进一步,实践持续部署(CDP):它去掉了流程中唯一需要人来执行的动作,把每个通过流水线的提交都直接部署到生产环境。这两者都无法假装。你不能“部分地”做 CD 或 CDP,也不能“差不多”做到。如果只是差不多,那你做的只是持续集成、“最终会部署”的流程,或者别的什么东西。
总而言之,希望你已经准备好了。你将在 Kubernetes 集群内迈出实施持续部署的一步。在本书结束时,你唯一剩下的事情就是花费未知的时间“现代化”你的应用架构,或者将它们抛弃重来。你将改变你的工具、流程和文化。本书不会帮助你处理所有这些。我们专注于工具和流程。你将不得不自行解决文化和架构的问题。测试的方式也是如此。我不会教你测试,也不会宣扬 TDD。我假设你已经了解所有这些,我们可以专注于持续部署管道。
此时,你可能感到绝望。你可能还没准备好。你可能认为你的管理层没有同意,你公司的人员不会接受这个方向,或者你没有足够的时间和资金。不要沮丧。知道路径是最关键的部分。即使你不能立即到达那里,你也应该知道目的地是什么,这样你的步骤至少是朝着正确的方向迈进。
什么是持续部署?
每一个通过了流水线所有阶段的提交,都会被部署到生产环境。就这样。这就是你能找到的最简短、最准确的持续部署定义。对你来说这是不是要求太高了?如果你认为自己永远无法(或不应该)做到这一点,那我们可以退一步,先谈持续交付。
持续部署(CDP)和持续交付(CD)之间唯一实质性的区别是,前者将代码部署到生产环境,而后者需要我们选择要部署到生产环境的提交。这样不就简单多了吗?实际上并不是。其实几乎一样,因为在这两种情况下,我们都对流程有足够信心,认为每个提交都可以部署到生产环境。在持续交付的情况下,我们(人类)确实需要决定部署什么内容。但这正是导致重大混乱的原因。接下来是关键部分,请认真阅读。
如果每次提交(没有失败管道的提交)都可以部署到生产环境,那么就不需要工程师决定什么内容会被部署。每个提交都可以部署,我们只是可能不希望功能立刻对用户可用。这是一个商业决策。就这样。
作为一种学习经验,你应该让公司里技术能力最弱的人坐到屏幕前,查看构建情况,并让他(或她)选择要部署的版本。清洁服务的工作人员就是一个很好的候选人。在那个人点击按钮部署一个随机版本之前,你需要离开那个房间。
这里有个关键问题。你在那种情况下会有什么感受?如果你可以自信地去最近的酒吧喝咖啡,确信什么问题也不会发生,那么你就处在正确的位置。如果你会因此产生焦虑,那你离那里还很远。如果是这种情况,不必绝望。大多数人也会因为让一个随机的人部署一个随机版本而感到焦虑。但这并不重要。重要的是你是否想要达到那个目标。如果你想,那么继续往下读。如果你不想,那希望你只是在阅读本书的免费样章,并能做出明智的决定,避免浪费钱。换一本书读吧。
“等一下,”你可能会说,“我已经在做持续集成了,”你现在可能会这样想。“那么,持续交付或持续部署真的那么不同吗?”好吧,答案是它们并没有本质不同,但你可能误解了持续集成的定义。我甚至不打算给你定义它。自从持续集成成为一种实践已有超过十五年了。不过,我会问你几个问题。如果你对其中至少一个问题的回答是“否”,那么你并没有在做持续集成。开始吧。
- 你是否在每次提交时都构建并至少部分测试你的应用程序,无论该提交被推送到哪个分支?
- 是否每个人每天都至少提交一次?
- 你的分支是否会在创建后的几天之内(而不是更久)就合并回 master?
- 当构建失败时,你是否会停下手头的工作去修复它?这是否是最高优先级的任务(仅次于火灾、地震等威胁生命的紧急情况)?
就是这些。这些是你需要回答的唯一问题。对自己诚实一下。你真的对这四个问题都回答了“是”吗?如果是的话,你就是我的英雄。如果不是,剩下的只有一个问题需要回答。
你真的想做持续集成(CI)、持续交付(CD)或持续部署(CDP)吗?
第七章:大规模部署有状态应用程序
大多数应用程序是通过 Deployments 部署到 Kubernetes 的,它毫无疑问是最常用的控制器。Deployments 提供了我们可能需要的(几乎)所有功能:当应用需要扩展时,我们可以指定副本数;我们可以通过 PersistentVolumeClaims 挂载存储卷;我们可以通过 Services 与 Deployments 所控制的 Pods 通信;我们还可以执行滚动更新,在不停机的情况下部署新版本。除此之外,Deployments 还支持许多其他功能。这是否意味着 Deployments 是运行所有类型应用程序的首选方式?是否存在我们需要、但 Deployments 及其关联资源并没有提供的功能?
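在继续之前,下面用几条命令粗略回顾这些 Deployment 能力。这只是一个示意,其中的资源名称与镜像都是随意选取的示例,并非书中项目的一部分:

```bash
# 示意:Deployment 的副本扩缩、Service 访问与滚动更新(名称与镜像均为示例)
kubectl create deployment demo --image=nginx:1.14-alpine
kubectl scale deployment demo --replicas=3                  # 指定副本数
kubectl expose deployment demo --port=80                    # 通过 Service 与 Pod 通信
kubectl set image deployment demo nginx=nginx:1.15-alpine   # 滚动更新到新版本
kubectl rollout status deployment demo                      # 等待更新完成,过程中不中断服务
kubectl delete deployment,service demo                      # 清理
```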
在 Kubernetes 中运行有状态应用程序时,我们很快意识到 Deployments 并不能提供我们所需要的所有功能。并不是说我们需要额外的功能,而是 Deployments 中的一些功能并不完全按我们期望的方式运作。在许多情况下,Deployments 非常适合无状态应用程序。然而,我们可能需要寻找一个不同的控制器,它能让我们安全高效地运行有状态应用程序。这个控制器就是 StatefulSet。
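StatefulSet 的定义与 Deployment 十分相似,差别主要集中在几个专属字段上。如果想在进入示例之前先看看这些字段的官方说明,可以借助 kubectl explain(可选步骤,仅作参考):

```bash
# 查看 StatefulSet 相比 Deployment 多出的几个关键字段
kubectl explain statefulset.spec.serviceName           # 与之配合使用的(Headless)Service
kubectl explain statefulset.spec.volumeClaimTemplates  # 为每个副本分别创建 PVC 的模板
kubectl explain statefulset.spec.updateStrategy        # 有序滚动更新的策略
```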
让我们体验一下有状态应用程序背后的一些问题,以及 StatefulSets 带来的好处。我们暂时跳过理论,直接进入示例。为了做到这一点,我们需要一个集群。
创建集群
我们将通过克隆 vfarcic/k8s-specs 仓库开始动手演示,这个仓库包含了本书中将使用的所有示例定义。
```bash
git clone \
    https://github.com/vfarcic/k8s-specs.git

cd k8s-specs
```
Now that you have a repository with the examples we’ll use throughout the book, we should create a cluster unless you already have one. For this chapter, I’ll assume that you are running a cluster with **Kubernetes version 1.9** or higher. Further on, I’ll assume that you already have an **nginx Ingress Controller** deployed, that **RBAC** is set up, and that your cluster has a **default StorageClass**. If you are unsure about some of the requirements, I prepared a few Gists with the commands I used to create different clusters. Feel free to choose whichever suits you the best, or be brave and roll with your own. Ideally, you’ll run the commands from every chapter on each of the Kubernetes flavors. That way, you’ll not only learn the main subject but also gain experience in running Kubernetes in different combinations and, hopefully, make a more informed decision which flavor to use for your local development as well as for production. The Gists with the commands I used to create different variations of Kubernetes clusters are as follows. * [docker4mac.sh](https://gist.github.com/06e313db2957e92b1df09fe39c249a14): **Docker for Mac** with 2 CPUs, 2GB RAM, and with **nginx Ingress** controller. * [minikube.sh](https://gist.github.com/536e6329e750b795a882900c09feb13b): **minikube** with 2 CPUs, 2GB RAM, and with `ingress`, `storage-provisioner`, and `default-storageclass` addons enabled. * [kops.sh](https://gist.github.com/2a3e4ee9cb86d4a5a65cd3e4397f48fd): **kops in AWS** with 3 t2.small masters and 2 t2.medium nodes spread in three availability zones, and with **nginx Ingress** controller (assumes that the prerequisites are set through Appendix B). * [minishift.sh](https://gist.github.com/c9968f23ecb1f7b2ec40c6bcc0e03e4f): **minishift** with 2 CPUs, 2GB RAM, and version 1.16+. * [gke.sh](https://gist.github.com/5c52c165bf9c5002fedb61f8a5d6a6d1): **Google Kubernetes Engine (GKE)** with 3 n1-standard-1 (1 CPU, 3.75GB RAM) nodes (one in each zone), and with **nginx Ingress** controller running on top of the “standard” one that comes with GKE. We’ll use nginx Ingress for compatibility with other platforms. Feel free to modify the YAML files if you prefer NOT to install nginx Ingress. * [eks.sh](https://gist.github.com/5496f79a3886be794cc317c6f8dd7083): **Elastic Kubernetes Service (EKS)** with 2 t2.medium nodes, with **nginx Ingress** controller, and with a **default StorageClass**. ### Using StatefulSets To Run Stateful Applications Let’s see a StatefulSet in action and see whether it beings any benefits. We’ll use Jenkins as the first application we’ll deploy. It is a simple application to start with since it does not require a complicated setup and it cannot be scaled. On the other hand, Jenkins is a stateful application. It stores all its state into a single directory. There are no “special” requirements besides the need for a PersistentVolume. A sample Jenkins definition that uses StatefulSets can be found in `sts/jenkins.yml`. ````1` cat sts/jenkins.yml ``````````````````````````````````````````````````````````````The definition is relatively straightforward. It defines a Namespace for easier organization, a Service for routing traffic, and an Ingress that makes it accessible from outside the cluster. The interesting part is the `StatefulSet` definition. The only significant difference, when compared to Deployments, is that the `StatefulSet` can use `volumeClaimTemplates`. 
While Deployments require that we specify PersistentVolumeClaim separately, now we can define a claim template as part of the `StatefulSet` definition. Even though that might be a more convenient way to define claims, surely there are other reasons for this difference. Or maybe there isn’t. Let’s check it out by creating the resources defined in `sts/jenkins.yml`. ````1` kubectl apply `\` `2 ` -f sts/jenkins.yml `\` `3 ` --record `````````````````````````````````````````````````````````````We can see from the output that a Namespace, an Ingress, a Service, and a StatefulSet were created. In case you’re using minishift and deployed the YAML defined in `sts/jenkins-oc.yml`, you got a Route instead Ingress. Let’s confirm that the StatefulSet was rolled out correctly. ````1` kubectl -n jenkins `\` `2 ` rollout status sts jenkins Now that jenkins StatefulSet is up and running, we should check whether it created a PersistentVolumeClaim. 1` kubectl -n jenkins get pvc ```````````````````````````````````````````````````````````The output is as follows. 1NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE2jenkins-home-jenkins-0 Bound pvc-... 2Gi RWO gp2 2m ``````````````````````````````````````````````````````````It comes as no surprise that a claim was created. After all, we did specifyvolumeClaimTemplates as part of the StatefulSet definition. However, if we compare it with claims we make as separate resources (e.g., with Deployments), the format of the claim we just created is a bit different. It is a combination of the claim name (jenkins-home), the Namespace (jenkins), and the indexed suffix (0). The index is an indication that StatefulSets might create more than one claim. Still, we can see only one, so we’ll need to stash that thought for a while. Similarly, we might want to confirm that the claim created a PersistentVolume. ````1 kubectl -n jenkins get pv `````````````````````````````````````````````````````````Finally, as the last verification, we’ll open Jenkins in a browser and confirm that it looks like it’s working correctly. But, before we do that, we should retrieve the hostname or the IP assigned to us by the Ingress controller. 1` `CLUSTER_DNS``=``$(`kubectl -n jenkins `\` `2 ` get ing jenkins `\` `3 ` -o `jsonpath``=``"{.status.loadBalancer.ingress[0].hostname}"``)` `4` `5` `echo` `$CLUSTER_DNS` ````````````````````````````````````````````````````````We retrieved the hostname (or IP) from the Ingress resource, and now we are ready to open Jenkins in a browser. 1` open `“http://``KaTeX parse error: Expected 'EOF', got '#' at position 1617: …d scenario. #̲## Using Deploy…(`kubectl -n go-demo-3 get pods `` `2 ` -l `app``=`db `` `3 ` -o `jsonpath``=``”{.items[0].metadata.name}“``)` `4` `5` `DB_2``=``KaTeX parse error: Expected group as argument to '\`' at position 36: …-3 get pods `\` ̲`6 ` -l `app…DB_1` ````````````````````````````````````````````````The last lines of the output of the first `db` Pod are as follows. 1` ... `2` 2018-03-29T20:51:53.390+0000 I NETWORK [thread1] waiting for connections on port 27\ `3` 017 `4` 2018-03-29T20:51:53.681+0000 I NETWORK [thread1] connection accepted from 100.96.2.\ `5` 7:46888 #1 (1 connection now open) `6` 2018-03-29T20:51:55.984+0000 I NETWORK [thread1] connection accepted from 100.96.2.\ `7` 8:49418 #2 (2 connections now open) `8` 2018-03-29T20:51:59.182+0000 I NETWORK [thread1] connection accepted from 100.96.3.\ `9` 6:43940 #3 (3 connections now open) ```````````````````````````````````````````````Everything seems OK. 
We can see that the database initialized and started `waiting for connections`. Soon after, the three replicas of the `api` Deployment connected to MongoDB running inside this Pod. Now that we know that the first Pod is the one that is running, we should look at the logs of the second. That must be one of those with errors. 1` kubectl -n go-demo-3 logs `$DB_2` ``````````````````````````````````````````````The output, limited to the last few lines, is as follows. 1` ... `2` 2018-03-29T20:54:57.362+0000 I STORAGE [initandlisten] exception in initAndListen: \ `3` 98 Unable to lock file: /data/db/mongod.lock Resource temporarily unavailable. Is a \ `4` mongod instance already running?, terminating `5` 2018-03-29T20:54:57.362+0000 I NETWORK [initandlisten] shutdown: going to close lis\ `6` tening sockets... `7` 2018-03-29T20:54:57.362+0000 I NETWORK [initandlisten] shutdown: going to flush dia\ `8` glog... `9` 2018-03-29T20:54:57.362+0000 I CONTROL [initandlisten] now exiting `10` 2018-03-29T20:54:57.362+0000 I CONTROL [initandlisten] shutting down with code:100 `````````````````````````````````````````````There’s the symptom of the problem. MongoDB could not lock the `/data/db/mongod.lock` file, and it shut itself down. Let’s take a look at the PersistentVolumes. 1` kubectl get pv ````````````````````````````````````````````The output is as follows. 1` NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REA\ `2` SON AGE `3` pvc-... 2Gi RWO Delete Bound go-demo-3/mongo gp2 \ `4 ` 3m ```````````````````````````````````````````There is only one bound PersistentVolume. That is to be expected. Even if we’d want to, we could not tell a Deployment to create a volume for each replica. The Deployment mounted a volume associated with the claim which, in turn, created a PersistentVolume. All the replicas tried to mount the same volume. MongoDB is designed in a way that each instance requires exclusive access to a directory where it stores its state. We tried to mount the same volume to all the replicas, and only one of them got the lock. All the others failed. If you’re using kops with AWS, the default `StorageClass` is using the `kubernetes.io/aws-ebs` provisioner. Since EBS can be mounted only by a single entity, our claim has the access mode set to `ReadWriteOnce`. To make thing more complicated, EBS cannot span multiple availability-zones, and we are hoping to spread our MongoDB replicas so that they can survive even failure of a whole zone. The same is true for GKE which also uses block storage by default. Having a `ReadWriteOnce` PersistentVolumeClaims and EBS not being able to span multiple availability zones is not a problem for our use-case. The real issue is that each MongoDB instance needs a separate volume or at least a different directory. Neither of the solutions can be (easily) solved with Deployments. <https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00005.jpeg> Figure 1-1: Pods created through the Deployment share the same PersistentVolume (AWS variation) Now we have a good use-case that might show some of the benefits of StatefulSet controllers. Before we move on, we’ll delete the `go-demo-3` Namespace and all the resources running inside it. 1` kubectl delete ns go-demo-3 ``````````````````````````````````````````### Using StatefulSets To Run Stateful Applications At Scale Let’s see whether we can solve the problem with PersistentVolumes through a StatefulSet. 
As a reminder, our goal (for now) is for each instance of a MongoDB to get a separate volume. The updated definition is in the `sts/go-demo-3-sts.yml` file. 1` cat sts/go-demo-3-sts.yml `````````````````````````````````````````Most of the new definition is the same as the one we used before, so we’ll comment only on the differences. The first in line is StatefulSet that replaces the `db` Deployment. It is as follows. 1` `apiVersion``:` `apps/v1beta2` `2` `kind``:` `StatefulSet` `3` `metadata``:` `4` `name``:` `db` `5` `namespace``:` `go-demo-3` `6` `spec``:` `7` `serviceName``:` `db` `8` `replicas``:` `3` `9` `selector``:` `10 ` `matchLabels``:` `11 ` `app``:` `db` `12 ` `template``:` `13 ` `metadata``:` `14 ` `labels``:` `15 ` `app``:` `db` `16 ` `spec``:` `17 ` `terminationGracePeriodSeconds``:` `10` `18 ` `containers``:` `19 ` `-` `name``:` `db` `20 ` `image``:` `mongo:3.3` `21 ` `command``:` `22 ` `-` `mongod` `23 ` `-` `”–replSet"` `24 ` `-` `rs0` `25 ` `-` `“–smallfiles”` `26 ` `-` `“–noprealloc”` `27 ` `ports``:` `28 ` `-` `containerPort``:` `27017` `29 ` `resources``:` `30 ` `limits``:` `31 ` `memory``:` `“100Mi”` `32 ` `cpu``:` `0.1` `33 ` `requests``:` `34 ` `memory``:` `“50Mi”` `35 ` `cpu``:` `0.01` `36 ` `volumeMounts``:` `37 ` `-` `name``:` `mongo-data` `38 ` `mountPath``:` `/data/db` `39 ` `volumeClaimTemplates``:` `40 ` `-` `metadata``:` `41 ` `name``:` `mongo-data` `42 ` `spec``:` `43 ` `accessModes``:` `44 ` `-` `ReadWriteOnce` `45 ` `resources``:` `46 ` `requests``:` `47 ` `storage``:` `2Gi` ````````````````````````````````````````As you already saw with Jenkins, StatefulSet definitions are almost the same as Deployments. The only important difference is that we are not defining PersistentVolumeClaim as a separate resource but letting the StatefulSet take care of it through the specification set inside the `volumeClaimTemplates` entry. We’ll see it in action soon. We also used this opportunity to tweak `mongod` process by specifying the `db` container `command` that creates a ReplicaSet `rs0`. Please note that this replica set is specific to MongoDB and it is in no way related to Kubernetes ReplicaSet controller. Creation of a MongoDB replica set is the base for some of the things we’ll do later on. Another difference is in the `db` Service. It is as follows. 1` `apiVersion``:` `v1` `2` `kind``:` `Service` `3` `metadata``:` `4` `name``:` `db` `5` `namespace``:` `go-demo-3` `6` `spec``:` `7` `ports``:` `8` `-` `port``:` `27017` `9` `clusterIP``:` `None` `10 ` `selector``:` `11 ` `app``:` `db` ```````````````````````````````````````This time we set `clusterIP` to `None`. That will create a Headless Service. Headless service is a service that doesn’t need load-balancing and has a single service IP. Everything else in this YAML file is the same as in the one that used Deployment controller to run MongoDB. To summarize, we changed `db` Deployment into a StatefulSet, we added a command that creates MongoDB replica set named `rs0`, and we set the `db` Service to be Headless. We’ll explore the reasons and the effects of those changes soon. For now, we’ll create the resources defined in the `sts/go-demo-3-sts.yml` file. 1` kubectl apply `` `2 ` -f sts/go-demo-3-sts.yml `` `3 ` --record `4` `5` kubectl -n go-demo-3 get pods ``````````````````````````````````````We created the resources and retrieved the Pods. The output of the latter command is as follows. 1` NAME READY STATUS RESTARTS AGE `2` api-... 0/1 Running 0 4s `3` api-... 0/1 Running 0 4s `4` api-... 
0/1 Running 0 4s `5` db-0 0/1 ContainerCreating 0 5s `````````````````````````````````````We can see that all three replicas of the `api` Pods are running or, at least, that’s how it seems so far. The situation with `db` Pods is different. Kubernetes is creating only one replica, even though we specified three. Let’s wait for a bit and retrieve the Pods again. 1` kubectl -n go-demo-3 get pods Forty seconds later, the output is as follows. ````1` NAME READY STATUS RESTARTS AGE `2` api-... 0/1 CrashLoopBackOff 1 44s `3` api-... 0/1 CrashLoopBackOff 1 44s `4` api-... 0/1 Running 2 44s `5` db-0 1/1 Running 0 45s `6` db-1 0/1 ContainerCreating 0 9s ```````````````````````````````````We can see that the first `db` Pod is running and that creation of the second started. At the same time, our `api` Pods are crashing. We’ll ignore them for now, and concentrate on `db` Pods. Let’s wait a bit more and observe what happens next. ````1` kubectl -n go-demo-3 get pods ``````````````````````````````````A minute later, the output is as follows. ````1` NAME READY STATUS RESTARTS AGE `2` api-... 0/1 CrashLoopBackOff 4 1m `3` api-... 0/1 Running 4 1m `4` api-... 0/1 CrashLoopBackOff 4 1m `5` db-0 1/1 Running 0 2m `6` db-1 1/1 Running 0 1m `7` db-2 0/1 ContainerCreating 0 34s `````````````````````````````````The second `db` Pod started running, and the system is creating the third one. It seems that our progress with the database is going in the right direction. Let’s wait a while longer before we retrieve the Pods one more time. ````1` kubectl -n go-demo-3 get pods 1` NAME READY STATUS RESTARTS AGE `2` api-… 0/1 CrashLoopBackOff 4 3m `3` api-… 0/1 CrashLoopBackOff 4 3m `4` api-… 0/1 CrashLoopBackOff 4 3m `5` db-0 1/1 Running 0 3m `6` db-1 1/1 Running 0 2m `7` db-2 1/1 Running 0 1m ```````````````````````````````Another minute later, the third `db` Pod is also running but our `api` Pods are still failing. We’ll deal with that problem soon. What we just observed is an essential difference between Deployments and StatefulSets. Replicas of the latter are created sequentially. Only after the first replica was running, the StatefulSet started creating the second. Similarly, the creation of the third began solely after the second was running. Moreover, we can see that the names of the Pods created through the StatefulSet are predictable. Unlike Deployments that create random suffixes for each Pod, StatefulSets create them with indexed suffixes based on integer ordinals. The name of the first Pod will always end suffixed with `-0`, the second will be suffixed with `-1`, and so on. That naming will be maintained forever. If we’d initiate rolling updates, Kubernetes would replace the Pods of the `db` StatefulSet, but the names would remain the same. The nature of the sequential creation of Pods and formatting of their names provides predictability that is often paramount with stateful applications. We can think of StatefulSet replicas as being separate Pods with guaranteed ordering, uniqueness, and predictability. How about PersistentVolumes? The fact that the `db` Pods did not fail means that MongoDB instances managed to get the locks. That means that they are not sharing the same PersistentVolume, or that they are using different directories within the same volume. Let’s take a look at the PersistentVolumes created in the cluster. ````1` kubectl get pv ``````````````````````````````The output is as follows. 
````1` NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAG\ `2` ECLASS REASON AGE `3` pvc-… 2Gi RWO Delete Bound go-demo-3/mongo-data-db-0 gp2 \ `4 ` 9m `5` pvc-… 2Gi RWO Delete Bound go-demo-3/mongo-data-db-1 gp2 \ `6 ` 8m `7` pvc-… 2Gi RWO Delete Bound go-demo-3/mongo-data-db-2 gp2 \ `8 ` 7m `````````````````````````````Now we can observe the reasoning behind using `volumeClaimTemplates` spec inside the definition of the StatefulSet. It used the template to create a claim for each replica. We specified that there should be three replicas, so it created three Pods, as well as three separate volume claims. The result is three PersistentVolumes. Moreover, we can see that the claims also follow a specific naming convention. The format is a combination of the name of the claim (`mongo-data`), the name of the StatefulSet `db`, and index (`0`, `1`, and `2`). Judging by the age of the claims, we can see that they followed the same pattern as the Pods. They are approximately a minute apart. The StatefulSet created the first Pod and used the claim template to create a PersistentVolume and attach it. Later on, it continued to the second Pod and the claim, and after that with the third. Pods are created sequentially, and each generated a new PersistentVolumeClaim. If a Pod is (re)scheduled due to a failure or a rolling update, it’ll continue using the same PersistentVolumeClaim and, as a result, it will keep using the same PersistentVolume, making Pods and volumes inseparable. https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00006.jpeg Figure 1-2: Each Pod created through the StatefulSet gets a PersistentVolume (AWS variation) Given that each Pod in a StatefulSet has a unique and a predictable name, we can assume that the same applies to hostnames inside those Pods. Let’s check it out. ````1` kubectl -n go-demo-3 `` `2 ` `exec` -it db-0 – hostname ````````````````````````````We executed `hostname` command inside one of the replicas of the StatefulSet. The output is as follows. ````1` db-0 ```````````````````````````Just as names of the Pods created by the StatefulSet, hostnames are predictable as well. They are following the same pattern as Pod names. Each Pod in a StatefulSet derives its hostname from the name of the StatefulSet and the ordinal of the Pod. The pattern for the constructed hostname is `[STATEFULSET_NAME]-[INDEX]`. Let’s move into the Service related to the StatefulSet. If we take another look at the `db` Service defined in `sts/go-demo-3-sts.yml`, we’ll notice that it has `clusterIP` set to `None`. As a result, the Service is headless. In most cases we want Services to handle load-balancing and forward requests to one of the replicas. Load balancing is often round-robin even though it can be changed to other algorithms. However, sometimes we don’t need the Service to do load-balancing, nor we want it to provide a single IP for the Service. That is certainly true for MongoDB. If we are to convert its instances into a replica set, we need to have a separate and stable address for each. So, we disabled Service’s load-balancing by setting `spec.clusterIP` to `None`. That converted it into a Headless Service and let StatefulSet take over its algorithm. We’ll explore the effect of combining StatefulSets with Headless Services by creating a new Pod from which we can execute `nslookup` commands. 
````1` kubectl -n go-demo-3 `` `2 ` run -it `` `3 ` --image busybox dns-test `` `4 ` --restart`=`Never `` `5 ` --rm sh ``````````````````````````We created a new Pod based on `busybox` inside the `go-demo-3` Namespace. We specified `sh` as the command together with the `-ti` argument that allocated a TTY and standard input (`stdin`). As a result, we are inside the container created through the `dns-test` Pod, and we can execute our first `nslookup` query. ````1` nslookup db `````````````````````````The output is as follows. ````1` `Server``:` `100.64``.``0.10` `2` `Address` `1``:` `100.64``.``0.10` `kube``-``dns``.``kube``-``system``.``svc``.``cluster``.``local` `3` `4` `Name``:` `db` `5` `Address` `1``:` `100.96``.``2.14` `db``-``0``.``db``.``go``-``demo``-``3``.``svc``.``cluster``.``local` `6` `Address` `2``:` `100.96``.``2.15` `db``-``2``.``db``.``go``-``demo``-``3``.``svc``.``cluster``.``local` `7` `Address` `3``:` `100.96``.``3.8` `db``-``1``.``db``.``go``-``demo``-``3``.``svc``.``cluster``.``local` ````````````````````````We can see that the request was picked by the `kube-dns` server and that it returned three addresses, one for each Pod in the StatefulSet. The StatefulSet is using the Headless Service to control the domain of its Pods. The domain managed by this Service takes the form of `[SERVICE_NAME].[NAMESPACE].svc.cluster.local`, where `cluster.local` is the cluster domain. However, we used a short syntax in our `nslookup` query that requires only the name of the service (`db`). Since the service is in the same Namespace, we did not need to specify `go-demo-3`. The Namespace is required only if we’d like to establish communication from one Namespace to another. When we executed `nslookup`, a request was sent to the CNAME of the Headless Service (`db`). It, in turn, returned SRV records associated with it. Those records point to A record entries that contain Pods IP addresses, one for each of the Pods managed by the StatefulSet. Let’s do `nslookup` of one of the Pods managed by the StatefulSet. The Pods can be accessed with a combination of the Pod name (e.g., `db-0`) and the name of the StatefulSet. If the Pods are in a different Namespace, we need to add it as a suffix. Finally, if we want to use the full CNAME, we can add `svc.cluster.local` as well. We can see the full address from the previous output (e.g., `db-0.db.go-demo-3.svc.cluster.local`). All in all, we can access the Pod with the index `0` as `db-0.db`, `db-0.db.go-demo-3`, or `db-0.db.go-demo-3.svc.cluster.local`. Any of the three combinations should work since we are inside the Pod running in the same Namespace. So, we’ll use the shortest version. ````1` nslookup db-0.db ```````````````````````The output is as follows. ````1` `Server``:` `100.64``.``0.10` `2` `Address` `1``:` `100.64``.``0.10` `kube``-``dns``.``kube``-``system``.``svc``.``cluster``.``local` `3` `4` `Name``:` `db``-``0``.``db` `5` `Address` `1``:` `100.96``.``2.14` `db``-``0``.``db``.``go``-``demo``-``3``.``svc``.``cluster``.``local` ``````````````````````We can see that the output matches part of the output of the previous `nslookup` query. The only difference is that this time it is limited to the particular Pod. What we got with the combination of a StatefulSet and a Headless Service is a stable network identity. Unless we change the number of replicas of this StatefulSet, CNAME records are permanent. Unlike Deployments, StatefulSets maintain sticky identities for each of their Pods. 
These pods are created from the same spec, but they are not interchangeable. Each has a persistent identifier that is maintained across any rescheduling. Pods ordinals, hostnames, SRV records, and A records are never changed. However, the same cannot be said for IP addresses associated with them. They might change. That is why it is crucial not to configure applications to connect to Pods in a StatefulSet by IP address. Now that we know that the Pods managed with a StatefulSet have a stable network identity, we can proceed and configure MongoDB replica set. ````1` `exit` `2` `3` kubectl -n go-demo-3 `` `4 ` `exec` -it db-0 – sh `````````````````````We exited the `dns-test` Pod and entered into one of the MongoDB containers created by the StatefulSet. ````1` `mongo` `2` `3` `rs``.``initiate``(` `{` `4` `_id` `:` `“rs0”``,` `5` `members``:` `[` `6` `{``_id``:` `0``,` `host``:` `“db-0.db:27017”``},` `7` `{``_id``:` `1``,` `host``:` `“db-1.db:27017”``},` `8` `{``_id``:` `2``,` `host``:` `“db-2.db:27017”``}` `9` `]` `10` `})` ````````````````````We entered into `mongo` Shell and initiated a ReplicaSet (`rs.initiate`). The members of the ReplicaSet are the addresses of the three Pods combined with the default MongoDB port `27017`. The output is `{ “ok” : 1 }`, thus confirming that we (probably) configured the ReplicaSet correctly. Remember that our goal is not to go deep into MongoDB configuration, but only to explore some of the benefits behind StatefulSets. If we used Deployments, we would not get stable network identity. Any update would create new Pods with new identities. With StatefulSet, on the other hand, we know that there will always be `db-[INDEX].db`, no matter how often we update it. Such a feature is mandatory when applications need to form an internal cluster (or a replica set) and were not designed to discover each other dynamically. That is indeed the case with MongoDB. We’ll confirm that the MongoDB replica set was created correctly by outputting its status. ````1` `rs``.``status``()` ```````````````````The output, limited to the relevant parts, is as follows. ````1` `…` `2` `“members”` `:` `[` `3` `{` `4` `“_id”` `:` `0``,` `5` `…` `6` `“stateStr”` `:` `“PRIMARY”``,` `7` `…` `8` `},` `9` `{` `10 ` `“_id”` `:` `1``,` `11 ` `…` `12 ` `“stateStr”` `:` `“SECONDARY”``,` `13 ` `…` `14 ` `“syncingTo”` `:` `“db-0.db:27017”``,` `15 ` `…` `16 ` `},` `17 ` `{` `18 ` `“_id”` `:` `2``,` `19 ` `…` `20 ` `“stateStr”` `:` `“SECONDARY”``,` `21 ` `…` `22 ` `“syncingTo”` `:` `“db-0.db:27017”``,` `23 ` `…` `24 ` `}` `25 ` `]``,` `26 ` `“ok”` `:` `1` `27` `}` ``````````````````We can see that all three MongoDB Pods are members of the replica set. One of them is primary. If it fails, Kubernetes will reschedule it and, since it’s managed by the StatefulSet, it’ll maintain the same stable network identity. The secondary members are all syncing with the primary one that is reachable through the `db-0.db:27017` address. Now that the database is finally operational, we should confirm that the `api` Pods are running. ````1` `exit` `2` `3` `exit` `4` `5` kubectl -n go-demo-3 get pods `````````````````We exited MongoDB Shell and the container that hosts `db-0`, and we listed the Pods in the `go-demo-3` Namespace. The output of the latter command is as follows. 
````1` NAME READY STATUS RESTARTS AGE `2` api-… 1/1 Running 8 17m `3` api-… 1/1 Running 8 17m `4` api-… 1/1 Running 8 17m `5` db-0 1/1 Running 0 17m `6` db-1 1/1 Running 0 17m `7` db-2 1/1 Running 0 16m ````````````````If, in your case, `api` Pods are still not `running`, please wait for a few moments until Kubernetes restarts them. Now that the MongoDB replica set is operational, `api` Pods could connect to it, and Kubernetes changed their statuses to `Running`. The whole application is operational. There is one more StatefulSet-specific feature we should discuss. Let’s see what happens if, for example, we update the image of the `db` container. The updated definition is in `sts/go-demo-3-sts-upd.yml`. ````1` diff sts/go-demo-3-sts.yml `` `2 ` sts/go-demo-3-sts-upd.yml ```````````````As you can see from the `diff`, the only change is in the `image`. We’ll update `mongo` version from `3.3` to `3.4`. ````1` kubectl apply `` `2 ` -f sts/go-demo-3-sts-upd.yml `` `3 ` --record `4` `5` kubectl -n go-demo-3 get pods ``````````````We applied the new definition and retrieved the list of Pods inside the Namespace. The output is as follows. ````1` NAME READY STATUS RESTARTS AGE `2` api-… 1/1 Running 6 14m `3` api-… 1/1 Running 6 14m `4` api-… 1/1 Running 6 14m `5` db-0 1/1 Running 0 6m `6` db-1 1/1 Running 0 6m `7` db-2 0/1 ContainerCreating 0 14s `````````````We can see that the StatefulSet chose to update only one of its Pods. Moreover, it picked the one with the highest index. Let’s see the output of the same command half a minute later. ````1` NAME READY STATUS RESTARTS AGE `2` api-… 1/1 Running 6 15m `3` api-… 1/1 Running 6 15m `4` api-… 1/1 Running 6 15m `5` db-0 1/1 Running 0 7m `6` db-1 0/1 ContainerCreating 0 5s `7` db-2 1/1 Running 0 32s ````````````StatefulSet finished updating the `db-2` Pod and moved to the one before it. And so on, and so forth, all the way until all the Pods that form the StatefulSet were updated. The Pods in the StatefulSet were updated in reverse ordinal order. The StatefulSet terminated one of the Pods, and it waited for its status to become `Running` before it moved to the next one. All in all, when StatefulSet is created, it, in turn, generates Pods sequentially starting with the index `0`, and moving upwards. Updates to StatefulSets are following the same logic, except that StatefulSet begins updates with the Pod with the highest index, and it flows downwards. We did manage to make MongoDB replica set running, but the cost was too high. Creating Mongo replica set manually is not a good option. It should be no option at all. We’ll remove the `go-demo-3` Namespace (and everything inside it) and try to improve our process for deploying MongoDB. ````1` kubectl delete ns go-demo-3 ```````````### Using Sidecar Containers To Initialize Applications Even though we managed to deploy MongoDB replica set with three instances, the process was far from optimum. We had to execute manual steps. Since I don’t believe that manual hocus-pocus type of intervention is the way to go, we’ll try to improve the process by removing human interaction. We’ll do that through sidecar containers that will do the work of creating MongoDB replica set (not to be confused with Kubernetes ReplicaSet). Let’s take a look at yet another iteration of the `go-demo-3` application definition. ````1` cat sts/go-demo-3.yml ``````````The output, limited to relevant parts, is as follows. 
````1` `…` `2` `apiVersion``:` `apps/v1beta2` `3` `kind``:` `StatefulSet` `4` `metadata``:` `5` `name``:` `db` `6` `namespace``:` `go-demo-3` `7` `spec``:` `8` `…` `9` `template``:` `10 ` `…` `11 ` `spec``:` `12 ` `terminationGracePeriodSeconds``:` `10` `13 ` `containers``:` `14 ` `…` `15 ` `-` `name``:` `db-sidecar` `16 ` `image``:` `cvallance/mongo-k8s-sidecar` `17 ` `env``:` `18 ` `-` `name``:` `MONGO_SIDECAR_POD_LABELS` `19 ` `value``:` `“app=db”` `20 ` `-` `name``:` `KUBE_NAMESPACE` `21 ` `value``:` `go-demo-3` `22 ` `-` `name``:` `KUBERNETES_MONGO_SERVICE_NAME` `23 ` `value``:` `db` `24` `…` `````````When compared with `sts/go-demo-3-sts.yml`, the only difference is the addition of the second container in the StatefulSet `db`. It is based on cvallance/mongo-k8s-sidecar Docker image. I won’t bore you with the details but only give you the gist of the project. It creates and maintains MongoDB replica sets. The sidecar will monitor the Pods created through our StatefulSet, and it will reconfigure `db` containers so that MongoDB replica set is (almost) always up to date with the MongoDB instances. Let’s create the resources defined in `sts/go-demo-3.yml` and check whether everything works as expected. ````1` kubectl apply `` `2 ` -f sts/go-demo-3.yml `` `3 ` --record `4` `5` `# Wait for a few moments` `6` `7` kubectl -n go-demo-3 `` `8 ` logs db-0 `` `9 ` -c db-sidecar ````````We created the resources and outputted the logs of the `db-sidecar` container inside the `db-0` Pod. The output, limited to the last entry, is as follows. ````1` `…` `2` `Error` `in` `workloop` `{` `[``Error``:` `[``object` `Object``]]` `3` `message``:` `4` `{` `kind``:` `‘``Status``’``,` `5` `apiVersion``:` `‘``v1``’``,` `6` `metadata``:` `{},` `7` `status``:` `‘``Failure``’``,` `8` `message``:` `‘``pods` `is` `forbidden``:` `User` `“system:serviceaccount:go-demo-3:default”` `can`\ `9` `not` `list` `pods` `in` `the` `namespace` `“go-demo-3”``’``,` `10 ` `reason``:` `‘``Forbidden``’``,` `11 ` `details``:` `{` `kind``:` `‘``pods``’` `},` `12 ` `code``:` `403` `},` `13 ` `statusCode``:` `403` `}` ```````We can see that the `db-sidecar` container is not allowed to list the Pods in the `go-demo-3` Namespace. If, in your case, that’s not the output you’re seeing, you might need to wait for a few moments and re-execute the `logs` command. It is not surprising that the sidecar could not list the Pods. If it could, RBAC would be, more or less, useless. It would not matter that we restrict which resources users can create if any Pod could circumvent that. Just as we learned in The DevOps 2.3 Toolkit: Kubernetes how to set up users using RBAC, we need to do something similar with service accounts. We need to extend RBAC rules from human users to Pods. That will be the subject of the next chapter. ### To StatefulSet Or Not To StatefulSet StatefulSets provide a few essential features often required when running stateful applications in a Kubernetes cluster. Still, the division between Deployments and StatefulSets is not always clear. After all, both controllers can attach a PersistentVolume, both can forward requests through Services, and both support rolling updates. When should you choose one over the other? Saying that one is for stateful applications and the other isn’t would be an oversimplification that would not fit all the scenarios. As an example, we saw that we got no tangible benefit when we moved Jenkins from a Deployment into a StatefulSet. 
MongoDB, on the other hand, showcases essential benefits provided by StatefulSets. We can simplify decision making with a few questions. * Does your application need stable and unique network identifiers? * Does your application need stable persistent storage? * Does your application need ordered deployments, scaling, deletion, or rolling updates? If the answer to any of those questions is yes, your application should probably be managed by a StatefulSet. Otherwise, you should probably use a Deployment. All that does not mean that there are no other controller types you can use. There are a few. However, if the choice is limited to Deployment and StatefulSet controllers, those three questions should be on your mind when choosing which one to use. ### What Now? We’re finished with the exploration of StatefulSets. They will be essential since they will enable us to do a few things that will be critical for our Continuous Deployment processes. For now, we’ll remove the resources we created and take a break before we jump into Service Accounts in an attempt to fix the problem with the MongoDB sidecar. Who knows? We might find the usage of Service Account beyond the sidecar. ````1` kubectl delete ns go-demo-3 `````We deleted the `go-demo-3` Namespace and, with it, all the resources we created. We’ll start the next chapter with a clean slate. Feel free to delete the whole cluster if you’re not planning to explore the next chapter right away. If it’s dedicated to this book, there’s no need to waste resources and money on running a cluster that is not doing anything. Before you leave, please consult the following API references for more information about StatefulSets. * StatefulSet v1beta2 apps`` ````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````````
第八章:通过 Service Accounts 启用与 Kube API 的进程通信
当我们(人类)尝试访问启用了 RBAC 的 Kubernetes 集群时,我们会以用户身份进行身份验证。我们的用户名提供了一个身份,API 服务器利用这个身份来判断我们是否有权限执行预定的操作。同样,容器内运行的进程也可能需要访问 API。在这种情况下,它们会作为特定的 ServiceAccount 进行身份验证。
ServiceAccounts 提供了一种机制,用于授予在容器内运行的进程权限。在许多方面,ServiceAccounts 与 RBAC 用户或组非常相似。对于人类用户,我们使用 RoleBindings 和 ClusterRoleBindings 将用户和组与 Roles 和 ClusterRoles 关联起来。而在处理进程时,主要的区别在于名称和作用范围。我们不是创建用户或组,而是创建 ServiceAccounts,并将它们绑定到角色上。然而,与可以是全局的用户不同,ServiceAccounts 仅绑定到特定的 Namespace。
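下面用几条命令粗略演示这一机制:创建一个 ServiceAccount,通过 Role 和 RoleBinding 授予它列出 Pod 的权限,再验证权限是否生效。其中的 Namespace 与资源名称都是临时示例,并非本章后续 YAML 中的真实定义:

```bash
# 示意:为进程(Pod)准备一个只能列出 Pod 的 ServiceAccount(名称均为示例)
kubectl create namespace sa-demo
kubectl -n sa-demo create serviceaccount my-app

kubectl -n sa-demo create role pod-reader \
    --verb=list --resource=pods

kubectl -n sa-demo create rolebinding my-app-pod-reader \
    --role=pod-reader \
    --serviceaccount=sa-demo:my-app

# 验证:以该 ServiceAccount 的身份检查是否拥有预期权限
kubectl -n sa-demo auth can-i list pods \
    --as=system:serviceaccount:sa-demo:my-app

# 清理
kubectl delete namespace sa-demo
```

之后只要在 Pod 模板的 `serviceAccountName` 字段中引用这个账户,容器内的进程就会以它的身份访问 API,本章后面的定义文件采用的正是同样的思路。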
我们不会再深入探讨理论,而是通过实际操作来学习 ServiceAccounts 的不同方面。
创建一个集群
我们将通过进入我们克隆的 vfarcic/k8s-specs 仓库所在的目录来开始实际操作演示。
```bash
cd k8s-specs

git pull
```
Next, we’ll need a cluster which we can use to experiment with ServiceAccounts. The requirements are the same as those we used in the previous chapter. We’ll need **Kubernetes version 1.9** or higher as well as **nginx Ingress Controller**, **RBAC**, and a **default StorageClass**. If you didn’t destroy it, please continue using the cluster you created in the previous chapter. Otherwise, it should be reasonably fast to create a new one. For your convenience, the Gists and the specs we used before are available here as well. * [docker4mac.sh](https://gist.github.com/06e313db2957e92b1df09fe39c249a14): **Docker for Mac** with 2 CPUs, 2GB RAM, and with **nginx Ingress**. * [minikube.sh](https://gist.github.com/536e6329e750b795a882900c09feb13b): **minikube** with 2 CPUs, 2GB RAM, and with `ingress`, `storage-provisioner`, and `default-storageclass` addons enabled. * [kops.sh](https://gist.github.com/2a3e4ee9cb86d4a5a65cd3e4397f48fd): **kops in AWS** with 3 t2.small masters and 2 t2.medium nodes spread in three availability zones, and with **nginx Ingress** (assumes that the prerequisites are set through Appendix B). * [minishift.sh](https://gist.github.com/c9968f23ecb1f7b2ec40c6bcc0e03e4f): **minishift** with 2 CPUs, 2GB RAM, and version 1.16+. * [gke.sh](https://gist.github.com/5c52c165bf9c5002fedb61f8a5d6a6d1): **Google Kubernetes Engine (GKE)** with 3 n1-standard-1 (1 CPU, 3.75GB RAM) nodes (one in each zone), and with **nginx Ingress** controller running on top of the “standard” one that comes with GKE. We’ll use nginx Ingress for compatibility with other platforms. Feel free to modify the YAML files if you prefer NOT to install nginx Ingress. * [eks.sh](https://gist.github.com/5496f79a3886be794cc317c6f8dd7083): **Elastic Kubernetes Service (EKS)** with 2 t2.medium nodes, with **nginx Ingress** controller, and with a **default StorageClass**. Now that we have a cluster, we can proceed with a few examples. ### Configuring Jenkins Kubernetes Plugin We’ll start by creating the same Jenkins StatefulSet we used in the previous chapter. Once it’s up-and-running, we’ll try to use [Jenkins Kubernetes plugin](https://github.com/jenkinsci/kubernetes-plugin). If we’re successful, we’ll have a tool which could be used to execute continuous delivery or deployment tasks inside a Kubernetes cluster. ````1` cat sa/jenkins-no-sa.yml `````````````````````````````````````````````````````````````````````````````````````````````We won’t go through the definition since it is the same as the one we used in the previous chapter. There’s no mystery that has to be revealed, so we’ll move on and create the resources defined in that YAML. ````1` kubectl apply `\` `2 ` -f sa/jenkins-no-sa.yml `\` `3 ` --record ````````````````````````````````````````````````````````````````````````````````````````````Next, we’ll wait until `jenkins` StatefulSet is rolled out. ````1` kubectl -n jenkins `\` `2 ` rollout status sts jenkins Next, we’ll discover the DNS (or IP) of the load balancer. 1` `CLUSTER_DNS``=``$(`kubectl -n jenkins `\` `2 ` get ing jenkins `\` `3 ` -o `jsonpath``=``"{.status.loadBalancer.ingress[0].hostname}"``)` `4` `5` `echo` `$CLUSTER_DNS` ``````````````````````````````````````````````````````````````````````````````````````````Now that we know the address of the cluster, we can proceed and open Jenkins UI in a browser. 1open"http://$CLUSTER_DNS/jenkins" `````````````````````````````````````````````````````````````````````````````````````````Now we need to go through the setup wizard. 
It’s a dull process, and I’m sure you’re not thrilled with the prospect of going through it. However, we’re still missing knowledge and tools that will allow us to automate the process. For now, we’ll have to do the boring part manually. The first step is to get the initial admin password. ````1 kubectl -n jenkins \ 2 exec jenkins-0 -it – \ 3 cat /var/jenkins_home/secrets/initialAdminPassword ````````````````````````````````````````````````````````````````````````````````````````Please copy the output and paste it into the Administrator password field. Click the Continue button, followed with a click to Install suggested plugins button. Fill in the Create First Admin User fields and press the Save and Finish button. Jenkins is ready, and only a click away. Please press the Start using Jenkins button. If we are to use the Kubernetes plugin, we need to install it first. We’ll do that through the available plugins section of the plugin manager screen. 1` open `"http://``$CLUSTER_DNS``/jenkins/pluginManager/available"` ```````````````````````````````````````````````````````````````````````````````````````Type *Kubernetes* in the *Filter* field and select the checkbox next to it. Since we are already in the plugin manager screen, we might just as well install BlueOcean as well. It’ll make Jenkins prettier. Type *BlueOcean* in the *Filter* field and select the checkbox next to it. Now that we selected the plugins we want, the next step is to install them. Please click the *Install without restart* button and wait until all the plugins (and their dependencies) are installed. We are not yet finished. We still need to configure the newly installed Kubernetes plugin. 1` open `“http://``KaTeX parse error: Expected 'EOF', got '#' at position 2901: …```````````````#̲## Exploring th…CLUSTER_DNS``/jenkins”` ``````````````````````````The first step is to get the initial admin password. 1` kubectl -n jenkins `\` `2 ` `exec` jenkins-0 -it -- `\` `3 ` cat /var/jenkins_home/secrets/initialAdminPassword `````````````````````````Please copy the output and paste it into the *Administrator password* field. Click *Continue*, followed by the *Install suggested plugins* button. The rest of the setup requires you to *Create First Admin User*, so please go ahead. You don’t need my help on that one. Just as before, we’ll need to add *Kubernetes* and *BlueOcean* plugins. 1` open `“http://``
```bash
open "http://$CLUSTER_DNS/jenkins/pluginManager/available"
```

You already know what to do. Once you're done installing the two plugins, we'll go to the configuration screen.

```bash
open "http://$CLUSTER_DNS/jenkins/configure"
```

Please expand the *Add a new cloud* drop-down list in the *Cloud* section and select *Kubernetes*. Now that we have the ServiceAccount that grants us the required permissions, we can click the *Test Connection* button and confirm that it works. The output is as follows.

```
Error testing connection : Failure executing: GET at: https://kubernetes.default.svc/api/v1/namespaces/jenkins/pods. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods is forbidden: User "system:serviceaccount:jenkins:jenkins" cannot list pods in the namespace "jenkins".
```

Did we do something wrong? We didn't. That was the desired behavior. By not specifying a Namespace, Jenkins checked whether it has the necessary permissions in the Namespace where it runs. If we try to invoke Kube API from a container, it'll always use the same Namespace as the one where the container is. Jenkins is no exception. On the other hand, our YAML explicitly defined that we should have permissions to create Pods in the `build` Namespace. Let's fix that.

Please type *build* in the *Kubernetes Namespace* field and click the *Test Connection* button again. This time the output shows that the connection test was successful. We managed to configure Jenkins' Kubernetes plugin to operate inside the `build` Namespace. Still, there is one more thing missing.

When we create a job that uses Kubernetes Pods, an additional container will be added. That container will use JNLP to establish communication with the Jenkins master. We need to specify a valid address JNLP can use to connect to the master. Since the Pods will be in the `build` Namespace and the master is in `jenkins`, we need to use the longer DNS name that specifies both the name of the Service (`jenkins`) and the Namespace (also `jenkins`). On top of that, our master is configured to respond to requests with the root path `/jenkins`. All in all, the full address Pods can use to communicate with the Jenkins master should follow the pattern `http://[SERVICE_NAME].[NAMESPACE]/[PATH]`. Since all three of those elements are `jenkins`, the "real" address is `http://jenkins.jenkins/jenkins`. Please type it inside the *Jenkins URL* field and click the *Save* button.

Now we're ready to create a job that'll test that everything works as expected. Please click the *New Item* link from the left-hand menu to open the screen for creating jobs. Type *my-k8s-job* in the *item name* field, select *Pipeline* as the type, and click the *OK* button. Once inside the job configuration screen, click the *Pipeline* tab and write the script that follows inside the *Pipeline Script* field.
Now we're ready to create a job that'll test that everything works as expected. Please click the *New Item* link from the left-hand menu to open a screen for creating jobs. Type *my-k8s-job* in the *item name* field, select *Pipeline* as the type, and click the *OK* button. Once inside the job configuration screen, click the *Pipeline* tab and write the script that follows inside the *Pipeline Script* field.

```
podTemplate(
    label: 'kubernetes',
    containers: [
        containerTemplate(name: 'maven', image: 'maven:alpine', ttyEnabled: true, command: 'cat'),
        containerTemplate(name: 'golang', image: 'golang:alpine', ttyEnabled: true, command: 'cat')
    ]
) {
    node('kubernetes') {
        container('maven') {
            stage('build') {
                sh 'mvn --version'
            }
            stage('unit-test') {
                sh 'java -version'
            }
        }
        container('golang') {
            stage('deploy') {
                sh 'go version'
            }
        }
    }
}
```

The job is relatively simple. It uses `podTemplate` to define a `node` that will contain two containers. One of those is `golang`, and the other is `maven`. In both cases the `command` is `cat`. Without a long-running command (process `1`), the container would exit immediately, Kubernetes would detect that and start another container based on the same image. It would fail again, and the loop would continue. Without the main process running, we'd enter a never-ending loop. Further on, we define that we want to use the `podTemplate` as a node, and we start executing `sh` commands in different containers. Those commands only output software versions. The goal of this job is not to demonstrate a full CD pipeline (we'll do that later), but only to prove that the integration with Kubernetes works and that we can use different containers that contain the tools we need.

Don't forget to click the *Save* button.

Now that we have a job, we should run it and validate that the integration with Kubernetes indeed works. Please click the *Open Blue Ocean* link from the left-hand menu followed by the *Run* button. We'll let Jenkins run the build and switch to a Shell to observe what's happening.

```
kubectl -n build get pods
```

After a while, the output should be as follows.

```
NAME              READY STATUS            RESTARTS AGE
jenkins-slave-... 0/3   ContainerCreating 0        11s
```

We can see that Jenkins created a Pod with three containers. At this moment in time, those containers are still not fully functional. Kubernetes is probably pulling them to the assigned node. You might be wondering why there are three containers even though we specified two. Jenkins added the third to the Pod definition. It contains JNLP, which is in charge of communication between Pods acting as nodes and Jenkins masters. From the user's perspective, JNLP is non-existent. It is a transparent process we do not need to worry about.

Let's take another look at the Pods in the `build` Namespace.

```
kubectl -n build get pods
```

The output is as follows.

```
NAME              READY STATUS  RESTARTS AGE
jenkins-slave-... 3/3   Running 0        5s
```

This time, if you are a fast reader, all the containers that form the Pod are running, and Jenkins is using them to execute the instructions we defined in the job.

Let's take another look at the Pods.

```
kubectl -n build get pods
```

The output is as follows.

```
NAME              READY STATUS      RESTARTS AGE
jenkins-slave-... 3/3   Terminating 0        32s
```

Once Jenkins finished executing the instructions from the job, it issued a command to Kube API to terminate the Pod. Jenkins nodes created through `podTemplate` are called one-shot agents. Instead of having long-running nodes, they are created when needed and destroyed when not in use.
Since they are Pods, Kubernetes is scheduling them on the nodes that have enough resources. By combining one-shot agents with Kubernetes, we are distributing the load and, at the same time, using only the resources we need. After all, there's no need to waste CPU and memory on non-existing processes.

We're done with Jenkins, so let's remove the Namespaces we created before we move to the next use-case.

```
kubectl delete ns jenkins build
```

### Using ServiceAccounts From Side-Car Containers

We still have one more pending issue that we can solve with ServiceAccounts. In the previous chapter we tried to use the `cvallance/mongo-k8s-sidecar` container in the hope that it would dynamically create and manage a MongoDB replica set. We failed because, at that time, we did not know how to create sufficient permissions that would allow the side-car to do its job. Now we know better.

Let's take a look at an updated version of our *go-demo-3* application.

```
cat sa/go-demo-3.yml
```

The relevant parts of the output are as follows.

```
...
apiVersion: v1
kind: ServiceAccount
metadata:
  name: db
  namespace: go-demo-3

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: db
  namespace: go-demo-3
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["list"]

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: db
  namespace: go-demo-3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: db
subjects:
- kind: ServiceAccount
  name: db

---

apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
  namespace: go-demo-3
spec:
  ...
  template:
    ...
    spec:
      serviceAccountName: db
      ...
```

Just as with Jenkins, we have a ServiceAccount, a Role, and a RoleBinding. Since the side-car only needs to list the Pods, the Role is this time more restrictive than the one we created for Jenkins. Further down, in the StatefulSet, we added the `serviceAccountName: db` entry that links the set with the account.

By now, you should be familiar with all those resources. We're applying the same logic to the side-car as to Jenkins. Since there's no need for a lengthy discussion, we'll move on and `apply` the definition.

```
kubectl apply \
    -f sa/go-demo-3.yml \
    --record
```

Next, we'll take a look at the Pods created in the `go-demo-3` Namespace.

```
kubectl -n go-demo-3 \
    get pods
```

After a while, the output should be as follows.

```
NAME    READY STATUS  RESTARTS AGE
api-... 1/1   Running 1        1m
api-... 1/1   Running 1        1m
api-... 1/1   Running 1        1m
db-0    2/2   Running 0        1m
db-1    2/2   Running 0        1m
db-2    2/2   Running 0        54s
```

All the Pods are running, so it seems that, this time, the side-car did not have trouble communicating with the API.
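Another way to confirm that the permissions are in place, without waiting for the side-car to log anything, is to ask the API server directly whether the ServiceAccount can do what the Role grants. This is an optional check of mine, not a step from the walkthrough, and it assumes you are running it as a user allowed to impersonate ServiceAccounts (a cluster administrator typically is).

```
# Should answer "yes": the Role grants the db ServiceAccount the right to list Pods.
kubectl auth can-i list pods \
    --namespace go-demo-3 \
    --as system:serviceaccount:go-demo-3:db

# Should answer "no": deleting Pods is not part of the Role.
kubectl auth can-i delete pods \
    --namespace go-demo-3 \
    --as system:serviceaccount:go-demo-3:db
```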
To be on the safe side, we'll output the logs of one of the side-car containers.

```
kubectl -n go-demo-3 \
    logs db-0 -c db-sidecar
```

The output, limited to the last entries, is as follows.

```
...
    { _id: 1,
      host: 'db-1.db.go-demo-3.svc.cluster.local:27017',
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      slaveDelay: 0,
      votes: 1 },
    { _id: 2, host: 'db-2.db.go-demo-3.svc.cluster.local:27017' } ],
  settings:
    { chainingAllowed: true,
      heartbeatIntervalMillis: 2000,
      heartbeatTimeoutSecs: 10,
      electionTimeoutMillis: 10000,
      catchUpTimeoutMillis: 2000,
      getLastErrorModes: {},
      getLastErrorDefaults: { w: 1, wtimeout: 0 },
      replicaSetId: 5aef9e4c52b968b72a16ea5b } }
```

The details behind the output are not that important. What matters is that there are no errors. The side-car managed to retrieve the information it needs from Kube API, and all that's left for us is to delete the Namespace and conclude the chapter.

```
kubectl delete ns go-demo-3
```

### What Now?

ServiceAccounts combined with Roles and RoleBindings are an essential component of continuous deployment, or of any other process that needs to communicate with Kubernetes. The alternative is to run an unsecured cluster, which is not an option for any but the smallest organizations. RBAC is required when more than one person is operating or using a cluster. If RBAC is enabled, ServiceAccounts are a must. We'll use them a lot in the chapters that follow.

Please consult the APIs that follow for any additional information about ServiceAccounts and related resources.

- ServiceAccount v1 core
- Role v1 rbac
- ClusterRole v1 rbac
- RoleBinding v1 rbac
- ClusterRoleBinding v1 rbac

One more thing before you leave. Please consult the jenkins-kubernetes-plugin documentation for more information about the plugin.
Chapter 9: Defining Continuous Deployment

We should be able to execute most of the CDP steps from anywhere. Developers should be able to run them from a local Shell. Others might want to integrate them into their favorite IDEs. There are countless ways the CDP steps, or a subset of them, might be executed; running them as part of every commit is only one of the permutations. The way we execute the CDP steps should be independent of the way we define them. If we add the requirement for very high (if not complete) automation, it becomes clear that the steps must be simple commands or Shell scripts. Adding anything else on top of them is likely to result in tight coupling, which would limit our independence from the tools that run those steps.

The goal of this chapter is to define the minimum set of steps a continuous deployment process is likely to need. From there on, it will be up to you to extend those steps to fit the particular use-cases you run into in your projects.

Once we know what we should do, we'll proceed to define the commands required to reach the goal. We'll do our best to create the CDP steps in a way that makes them easy to port to other tools, and we'll try to avoid locking ourselves to any of them. There will always be a few steps that are very specific to the tools we use, but I hope those will be limited to the scaffolding, not to the logic of the CDP.

Whether we will fully accomplish our goals remains to be seen. For now, we'll ignore Jenkins as well as all the other tools that could be used to orchestrate a continuous deployment process. Instead, we'll focus on a Shell and the commands we need to execute, even though we might end up writing a script or two.

### Is It Continuous Delivery Or Continuous Deployment?

Everyone wants to implement continuous delivery or deployment. After all, the benefits are too significant to be ignored: faster delivery, higher quality, lower costs, people freed to spend their time on work that creates value, and so on. Those improvements are music to the ears of any decision maker, especially if that person has a business background. If a tech geek can articulate the benefits continuous delivery brings, the response when asking a business representative for a budget is almost always "Yes! Go ahead."

By now, you might be confused about the differences between continuous integration, delivery, and deployment, so I'll do my best to walk you through the main objectives behind each of those processes.

You are doing continuous integration (CI) if you have a set of automated processes that run every time you commit a change to the code repository. What we're trying to accomplish with CI is a state in which every commit is validated shortly after it is made. We want to know not only whether what we did works, but also whether it integrates well with the work our colleagues are doing. That is why everyone needs to merge their code to the master branch, or at least to some other common branch. How we name the branches matters little; what matters is that the time between forking the code and merging it back cannot be long: hours, maybe days. If we delay integration longer than that, we risk spending too much time on something that breaks the work of others.

The problem with continuous integration is that the level of automation is not high enough. We do not trust the process sufficiently. We feel it brings benefits, but we still need a second opinion: we need humans to confirm the result of the process executed by machines.

Continuous delivery (CD) is a superset of continuous integration. It features a fully automated process that runs on every commit. If none of the steps in the process fail, we declare the commit ready for production.

Finally, continuous deployment (CDP) is almost the same as continuous delivery. All the steps of the process are fully automated in both cases. The only difference is that the "deploy to production" button no longer exists.

Even though CD and CDP are almost the same from the process point of view, the latter might require changes in the way we develop our applications. For example, we might need to start using feature toggles that allow us to disable partially finished features. Most of the changes required by CDP are things we should adopt anyway; with CDP, however, that need is elevated to a higher level.

We won't go into all the cultural and development changes one would need to make before attempting to reach the CDP nirvana. That would be a subject for a different book, and it would require more space than we have. I won't even try to convince you to embrace continuous deployment. There are many valid cases in which CDP is not a good option, and even more in which it is simply not attainable without massive cultural and technical changes that go well beyond Kubernetes. Statistically speaking, it is likely that you are not ready to embrace continuous deployment.

At this point, you might be wondering whether it makes sense for you to keep reading. Maybe you really are not ready for continuous deployment, and maybe you think this is a waste of time. If that's the case, my message to you is that it doesn't matter. The fact that you already have some Kubernetes experience tells me that you are not a laggard. You chose to embrace a new way of working. You saw the benefits of distributed systems, and you accepted things that surely looked crazy when you first started.

If you got this far, you are ready to learn and practice the processes that follow. You might not be ready for continuous deployment. That's OK; you can fall back to continuous delivery. If even that is too much, you can start with continuous integration. The reason I'm saying it doesn't matter is that most of the steps are the same in all those cases. No matter whether you're planning to do CI, CD, or CDP, you have to build something, you have to run some tests, and you have to deploy the application somewhere.

From the technical perspective, it makes no difference whether we deploy to a local cluster, to a cluster dedicated to testing, or to production. A deployment to a Kubernetes cluster, whatever its purpose, is roughly the same. You might choose to have a single cluster for everything. That is fine as well; that's why we have Namespaces. You might not trust your tests fully, but that's not a problem at the beginning either, because the way we execute tests does not differ based on how much we trust them. I could go on with similar arguments, and the point would remain the same: the process is mostly identical regardless of how much you trust it. Trust is earned over time.

The goal of this book is to teach you how to apply continuous deployment to a Kubernetes cluster. It's up to you to decide when your expertise, culture, and code are ready for it. The pipeline we'll build should be the same no matter whether you use it for CI, CD, or CDP; only a few parameters might change.

All in all, the first objective is to define the base steps of our continuous deployment process. We'll worry about executing them later.
### Defining Continuous Deployment Goals

The continuous deployment process is relatively easy to explain, even though the implementation can get tricky. We'll split our requirements into two groups. We'll start with a discussion of the overall goals that should apply to the whole process. To be more precise, we'll talk about the requirements I consider non-negotiable.

A pipeline needs to be secure. Normally, that would not be a problem. Before Kubernetes, we would run the pipeline steps on separate servers: one dedicated to building, another to testing, maybe one for integration and another for performance tests. Once we adopt a container scheduler and move into a cluster, we lose control over the servers. Even though it is possible to run something on a specific server, that is highly discouraged in Kubernetes. We should let Kubernetes schedule Pods with as few restrictions as possible. That means our builds and tests might run in the production cluster, and that might prove not to be secure. If we are not careful, a malicious user could exploit the shared space. Even more likely, our tests might contain an unwanted side-effect that could put the production applications at risk.

We could create separate clusters: one dedicated to production and another for everything else. While that is indeed an option we should explore, Kubernetes already provides the tools we need to make a cluster secure. We have RBAC, ServiceAccounts, Namespaces, PodSecurityPolicies, NetworkPolicies, and a few other resources at our disposal. So we can share the same cluster and still keep things reasonably secure.

Security is not the only requirement. Even when everything is secured, we still need to make sure that our pipelines do not negatively affect the other applications running in the cluster. If we are not careful, tests might, for example, request or use too many resources and, as a result, leave the other applications and processes in the cluster without enough memory. Fortunately, Kubernetes has a solution for those problems as well. We can combine Namespaces with LimitRanges and ResourceQuotas. While they do not provide a complete guarantee that nothing will go wrong (nothing does), they do give us a set of tools that, when used correctly, provide reasonable assurance that the processes in a Namespace will not go rogue.

Our pipeline should be fast. If it takes too long to run, we might be tempted to start working on a new feature before the run finishes. If the run fails, we'll have to decide whether to stop working on the new feature and pay the price of context switching, or to ignore the problem until we have time to deal with it. While neither scenario is good, the latter is the worst and should be avoided. A failed pipeline must have the highest priority. Otherwise, what's the point of having automated and continuous processes if dealing with problems is merely an eventual task?

The problem is that we often cannot accomplish those goals independently. We may be forced to make trade-offs. Security often clashes with speed, and we might need to strike a balance between the two.

Finally, the most important goal, the one that sits above all the others, is that our continuous deployment pipeline must run on every commit to the master branch. That provides continuous feedback about the readiness of the system and, in a way, it forces people to merge to master often. When we create a branch, it does not really exist until it gets back to master, or whatever the name of the production-ready branch is. The longer we wait to merge, the bigger the problem of integrating our code with the work of our colleagues.

Now that we got the high-level objectives out of the way, we should switch our focus to the particular steps a pipeline should contain.
### Defining Continuous Deployment Steps

We'll try to define the minimum set of steps any continuous deployment pipeline should execute. Don't take them literally. Every company is different, and every project has something special about it. You will likely have to extend these steps to suit your particular needs. However, that should not be a problem. Once we get a grip on the mandatory steps, extending the process should be relatively straightforward, unless you need to interact with tools that do not have a well-defined API or a decent CLI. If that's the case, my advice is to drop those tools; they are not worth the suffering they so often impose.

We can split the pipeline into several stages. We'll need to build the artifacts (after running static tests and analysis). We have to run functional tests, because unit tests are not enough. We need to create a release and deploy it somewhere (hopefully to production). No matter how much we trust the earlier stages, we do need to run tests that validate whether the deployment (to production) was successful. Finally, we need to do some cleanup at the end of the process and remove all the processes created for the pipeline; there is no point in letting them linger around idle.

All in all, the stages are as follows.

- Build stage
- Functional testing stage
- Release stage
- Deploy stage
- Production testing stage
- Cleanup stage
The plan is as follows. In the build stage, we'll build a Docker image and push it to a registry (Docker Hub, in our case). However, since we should not build untested artifacts, we'll run static tests before the actual build. Once our Docker image is pushed, we'll deploy the application and run tests against it. If everything works as expected, we'll make a new release and deploy it to production. To be on the safe side, we'll run another round of tests that validate whether the deployment to production was indeed successful. Finally, we'll clean up the system by removing everything except the production release.
https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00007.jpeg
Figure 3-1: The stages of a continuous deployment pipeline
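Expressed as commands, the same plan can be condensed into the sketch below. Treat it as a preview only, with the details (environment variables, image tag replacement, the Namespaces involved) deliberately left out; each step is executed and explained properly later in this chapter.

```
# Build stage: unit tests and static checks run inside the multi-stage image build.
docker image build -t $DH_USER/go-demo-3:1.0-beta .
docker image push $DH_USER/go-demo-3:1.0-beta

# Functional testing stage: deploy the release under test, then exercise it.
kubectl apply -f k8s/build.yml --record
go test ./... --run FunctionalTest

# Release stage: promote the image we validated.
docker image tag $DH_USER/go-demo-3:1.0-beta $DH_USER/go-demo-3:1.0
docker image push $DH_USER/go-demo-3:1.0

# Deploy stage and production testing stage.
kubectl apply -f k8s/prod.yml --record
go test ./... --run ProductionTest

# Cleanup stage: remove everything the pipeline created, except the production release.
kubectl -n go-demo-3-build delete pods --all
```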
We'll discuss the steps of each of those stages later on. For now, we need a cluster for the hands-on exercises that will help us get a better understanding of the pipeline we'll build later. If the steps we execute manually prove to be successful, writing a pipeline script should be relatively straightforward.

### Creating A Cluster

We'll start the hands-on part by going back to the local copy of the vfarcic/k8s-specs repository and pulling the latest version.
```
cd k8s-specs

git pull
```
Just as in the previous chapters, we'll need a cluster if we are to do the hands-on exercises. The rules are still the same. You can continue using the same cluster as before, or you can switch to a different Kubernetes flavor. You can continue using one of the Kubernetes distributions listed below, or be adventurous and try something different. If you go with the latter, please let me know how it went, and I'll test it myself and incorporate it into the list.

The Gists with the commands I used to create different variations of Kubernetes clusters are as follows.

- [docker4mac-3cpu.sh](https://gist.github.com/bf08bce43a26c7299b6bd365037eb074): **Docker for Mac** with 3 CPUs, 3 GB RAM, and with **nginx Ingress**.
- [minikube-3cpu.sh](https://gist.github.com/871b5d7742ea6c10469812018c308798): **minikube** with 3 CPUs, 3 GB RAM, and with the `ingress`, `storage-provisioner`, and `default-storageclass` addons enabled.
- [kops.sh](https://gist.github.com/2a3e4ee9cb86d4a5a65cd3e4397f48fd): **kops in AWS** with 3 t2.small masters and 2 t2.medium nodes spread in three availability zones, and with **nginx Ingress** (assumes that the prerequisites are set through Appendix B).
- [minishift-3cpu.sh](https://gist.github.com/2074633688a85ef3f887769b726066df): **minishift** with 3 CPUs, 3 GB RAM, and version 1.16+.
- [gke-2cpu.sh](https://gist.github.com/e3a2be59b0294438707b6b48adeb1a68): **Google Kubernetes Engine (GKE)** with 3 n1-highcpu-2 (2 CPUs, 1.8 GB RAM) nodes (one in each zone), and with the **nginx Ingress** controller running on top of the "standard" one that comes with GKE. We'll use nginx Ingress for compatibility with the other platforms. Feel free to modify the YAML files if you prefer NOT to install nginx Ingress.
- [eks.sh](https://gist.github.com/5496f79a3886be794cc317c6f8dd7083): **Elastic Kubernetes Service (EKS)** with 2 t2.medium nodes, with the **nginx Ingress** controller, and with a **default StorageClass**.

Now that we have a cluster, we can move into a more exciting part of this chapter. We'll start defining and executing the stages and steps of a continuous deployment pipeline.

### Creating Namespaces Dedicated To Continuous Deployment Processes

If we are to accomplish a reasonable level of security of our pipelines, we need to run them in dedicated Namespaces. Our cluster already has RBAC enabled, so we'll need a ServiceAccount as well. Since security alone is not enough, we also need to make sure that our pipeline does not affect other applications. We'll accomplish that by creating a LimitRange and a ResourceQuota.

I believe that in most cases we should store everything an application needs in the same repository. That makes maintenance much simpler and enables the team in charge of that application to be in full control, even though that team might not have all the permissions to create the resources in a cluster.

We'll continue using the `go-demo-3` repository but, since we'll have to change a few things, it is better if you apply the changes to your fork and, maybe, push them back to GitHub.

```
open "https://github.com/vfarcic/go-demo-3"
```

If you're not familiar with GitHub, all you have to do is to log in and click the *Fork* button located in the top-right corner of the screen.

Next, we'll remove the `go-demo-3` repository (if you happen to have it) and clone the fork. Make sure that you replace `[...]` with your GitHub username.
```
cd ..

rm -rf go-demo-3

export GH_USER=[...]

git clone https://github.com/$GH_USER/go-demo-3.git

cd go-demo-3
```

The only thing left is to edit a few files. Please open the *k8s/build.yml* and *k8s/prod.yml* files in your favorite editor and change all occurrences of `vfarcic` to your Docker Hub user.

The Namespace dedicated to all the building and testing activities of the `go-demo-3` project is defined in the `k8s/build-ns.yml` file stored in the project repository.

```
git pull

cat k8s/build-ns.yml
```

The output is as follows.

```
apiVersion: v1
kind: Namespace
metadata:
  name: go-demo-3-build

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
  namespace: go-demo-3-build

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: build
  namespace: go-demo-3-build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: build

---

apiVersion: v1
kind: LimitRange
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  limits:
  - default:
      memory: 200Mi
      cpu: 0.2
    defaultRequest:
      memory: 100Mi
      cpu: 0.1
    max:
      memory: 500Mi
      cpu: 0.5
    min:
      memory: 10Mi
      cpu: 0.05
    type: Container

---

apiVersion: v1
kind: ResourceQuota
metadata:
  name: build
  namespace: go-demo-3-build
spec:
  hard:
    requests.cpu: 2
    requests.memory: 3Gi
    limits.cpu: 3
    limits.memory: 4Gi
    pods: 15
```

If you are familiar with Namespaces, ServiceAccounts, LimitRanges, and ResourceQuotas, the definition should be fairly easy to understand. We defined the `go-demo-3-build` Namespace, which we'll use for all our CDP tasks. It'll contain the ServiceAccount `build` bound to the ClusterRole `admin`. As a result, containers running inside that Namespace will be able to do anything they want. It'll be their playground.

We also defined the LimitRange named `build`. It'll make sure to give sensible defaults to the Pods running in the Namespace. That way we can create Pods from which we'll build and test without worrying whether we forgot to specify the resources they need. After all, most of us do not know how much memory and CPU a build needs. The same LimitRange also contains minimum and maximum limits that should prevent users from specifying too small or too big resource reservations and limits.

Finally, since the capacity of our cluster is probably not unlimited, we defined a ResourceQuota that specifies the total amount of memory and CPU for requests and limits in that Namespace. We also defined that the maximum number of Pods running in that Namespace cannot be higher than fifteen. If we do end up with more Pods than we can place in that Namespace, some will be pending until others finish their work and resources are freed.
It is very likely that the team behind the project will not have sufficient permissions to create new Namespaces. If that's the case, the team would need to let the cluster administrator know about the existence of that YAML. In turn, he (or she) would review the definition and create the resources once he (or she) deduces that they are safe. For the sake of simplicity, you are that person, so please execute the command that follows.

```
kubectl apply \
    -f k8s/build-ns.yml \
    --record
```

As you can see from the output, the `go-demo-3-build` Namespace was created together with a few other resources.

Now that we have a Namespace dedicated to the lifecycle of our application, we'll create another one dedicated to our production release.

```
cat k8s/prod-ns.yml
```

The `go-demo-3` Namespace is very similar to `go-demo-3-build`. The major difference is in the RoleBinding. Since we can assume that processes running in the `go-demo-3-build` Namespace will, at some moment, want to deploy a release to production, we created the RoleBinding `build` which binds to the ServiceAccount `build` in the Namespace `go-demo-3-build`.

We'll `apply` this definition while still keeping our cluster administrator's hat on.

```
kubectl apply \
    -f k8s/prod-ns.yml \
    --record
```

Now we have two Namespaces dedicated to the `go-demo-3` application.
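Before we take the administrator's hat off, we can also confirm what those definitions gave us. The two `describe` commands below are an optional check of mine, not a step from the pipeline.

```
# Show the default requests and limits containers in the Namespace will get.
kubectl -n go-demo-3-build describe limitrange build

# Show the quota and how much of it is currently used (nothing, so far).
kubectl -n go-demo-3-build describe resourcequota build
```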
We are yet to figure out which tools we'll need for our continuous deployment pipeline.

### Defining A Pod With The Tools

Every application is different, and the tools we need for a continuous deployment pipeline vary from one case to another. For now, we'll focus on those we'll need for our *go-demo-3* application. Since the application is written in Go, we'll need the `golang` image to download the dependencies and run the tests. We'll have to build Docker images, so we should probably add a `docker` container as well. Finally, we'll have to execute quite a few `kubectl` commands. For those of you using OpenShift, we'll need `oc` as well. All in all, we need a Pod with `golang`, `docker`, `kubectl`, and (for some of you) `oc`.

The go-demo-3 repository already contains a definition of a Pod with all those containers, so let's take a closer look at it.

```
cat k8s/cd.yml
```

The output is as follows.

```
apiVersion: v1
kind: Pod
metadata:
  name: cd
  namespace: go-demo-3-build
spec:
  containers:
  - name: docker
    image: docker:18.03-git
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    - name: docker-socket
      mountPath: /var/run/docker.sock
    workingDir: /workspace
  - name: kubectl
    image: vfarcic/kubectl
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  - name: oc
    image: vfarcic/openshift-client
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  - name: golang
    image: golang:1.9
    command: ["sleep"]
    args: ["100000"]
    volumeMounts:
    - name: workspace
      mountPath: /workspace
    workingDir: /workspace
  serviceAccount: build
  volumes:
  - name: docker-socket
    hostPath:
      path: /var/run/docker.sock
      type: Socket
  - name: workspace
    emptyDir: {}
```

Most of the YAML defines containers based on images that contain the tools we need. What makes it special is that all the containers have the same mount called `workspace`. It maps to the `/workspace` directory inside the containers, and it uses the `emptyDir` volume type. We accomplish two things with that volume. On the one hand, all the containers have a shared space, so the artifacts generated through the actions we perform in one of them are available in the others. On the other hand, since the `emptyDir` volume type exists only for as long as the Pod is running, it'll be deleted when we remove the Pod. As a result, we won't be leaving unnecessary garbage on our nodes or external drives. To simplify things and save us from typing `cd /workspace`, we set `workingDir` on all the containers.

Unlike most of the other Pods we usually run in our clusters, those dedicated to CDP processes are short-lived. They are not supposed to exist for a long time, nor should they leave any trace of their existence once they finish executing the steps we are about to define.

The ability to run multiple containers on the same node, with a shared file system and networking, will be invaluable in our quest to define continuous deployment processes. If you were ever wondering what the purpose of having Pods as entities that envelop multiple containers is, the steps we are about to explore will hopefully provide a perfect use-case.

Let's create the Pod.

```
kubectl apply -f k8s/cd.yml --record
```

Please confirm that all the containers of the Pod are up and running by executing `kubectl -n go-demo-3-build get pods`. You should see that `4/4` are `ready`.
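Before moving on, we can convince ourselves that the shared `workspace` volume behaves as described. The two commands below are a quick sketch of my own, not part of the pipeline itself.

```
# Create a file from inside the golang container...
kubectl -n go-demo-3-build \
    exec cd -c golang -- touch /workspace/shared-volume-test

# ...and confirm that the kubectl container sees the same file.
kubectl -n go-demo-3-build \
    exec cd -c kubectl -- ls /workspace
```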
Now we can start working on our continuous deployment pipeline steps.

### Executing Continuous Integration Inside Containers

The first stage in our continuous deployment pipeline will contain quite a few steps. We'll need to check out the code, to run unit tests and any other static analysis, to build a Docker image, and to push it to the registry. If we define continuous integration (CI) as a set of automated steps followed with manual operations and validations, we can say that the steps we are about to execute qualify as CI.

The only thing we truly need to make all those steps work is a Docker client with access to a Docker server. One of the containers of the `cd` Pod already contains it. If you take another look at the definition, you'll see that we are mounting the Docker socket so that the Docker client inside the container can issue commands to the Docker server running on the host. Otherwise, we would be running Docker-in-Docker, and that is not a very good idea.

Now we can enter the `docker` container and check whether the Docker client can indeed communicate with the server.

```
kubectl -n go-demo-3-build \
    exec -it cd -c docker -- sh

docker container ls
```

Once inside the `docker` container, we executed `docker container ls` only as proof that we are using a client inside the container which, in turn, uses the Docker server running on the node. The output is the list of the containers running on top of one of our servers.

Let's get moving and execute the first step. We cannot do much without the code of our application, so the first step is to clone the repository. Make sure that you replace `[...]` with your GitHub username in the command that follows.

```
export GH_USER=[...]

git clone \
    https://github.com/$GH_USER/go-demo-3.git \
    .
```

Since we'll be pushing images to Docker Hub later on, we should also log in. Please replace `[...]` with your Docker Hub user.

```
export DH_USER=[...]

docker login -u $DH_USER
```

Once you enter your password, you should see the `Login Succeeded` message.

We are about to execute the most critical step of this stage. We'll build an image. At this moment you might be freaking out. You might be thinking that I went insane. A Pastafarian and a firm believer that nothing should be built without running tests first just told you to build an image as the first step after cloning the code. Sacrilege! However, this Dockerfile is special, so let's take a look at it.

```
cat Dockerfile
```

The output is as follows.

```
FROM golang:1.9 AS build
ADD . /src
WORKDIR /src
RUN go get -d -v -t
RUN go test --cover -v ./... --run UnitTest
RUN go build -v -o go-demo


FROM alpine:3.4
MAINTAINER Viktor Farcic viktor@farcic.com
RUN mkdir /lib64 && ln -s /lib/libc.musl-x86_64.so.1 /lib64/ld-linux-x86-64.so.2
EXPOSE 8080
ENV DB db
CMD ["go-demo"]
COPY --from=build /src/go-demo /usr/local/bin/go-demo
RUN chmod +x /usr/local/bin/go-demo
```

Normally, we'd run a container, in this case based on the `golang` image, execute a few processes, store the binary into a directory mounted as a volume, exit the container, and build a new image using the binary created earlier. While that would work fairly well, multi-stage builds allow us to streamline the process into a single `docker image build` command.

If you're not following Docker releases closely, you might be wondering what a multi-stage build is. It is a feature introduced in Docker 17.05 that allows us to specify multiple `FROM` statements in a Dockerfile. Each `FROM` instruction can use a different base, and each starts a new stage of the build process. Only the image created with the last `FROM` segment is kept.
As a result, we can specify all the steps we need to execute before building the final image without increasing its size. In our example, we need to execute a few Go commands that will download all the dependencies, run unit tests, and compile a binary. Therefore, we specified `golang` as the base image, followed with the `RUN` instruction that does all the heavy lifting. Please note that the first `FROM` statement is named `build`. We'll see why that matters soon.

Further down, we start over with a new `FROM` section that uses `alpine`. It is a very minimalist Linux distribution (a few MB in size) that guarantees that our final image is minimal and is not cluttered with unnecessary tools typically found in "traditional" Linux distros like `ubuntu`, `debian`, and `centos`. Further down we are creating everything our application needs, like the `DB` environment variable used by the code to find out where the database is, the command that should be executed when a container starts, and so on. The critical part is the `COPY` statement. It copies the binary we created in the `build` stage into the final image.

Let's build the image and push it to the registry.

```
docker image build \
    -t $DH_USER/go-demo-3:1.0-beta \
    .

docker image push \
    $DH_USER/go-demo-3:1.0-beta
```

The image is in the registry and ready for further deployments and testing. Mission accomplished. We're doing continuous integration manually. If we placed those few commands into a CI/CD tool, we would have the first part of the process up and running.

https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00008.jpeg

Figure 3-2: The build stage of a continuous deployment pipeline

We are still facing a few problems. Docker running in a Kubernetes cluster might be too old. It might not support all the features we need. As an example, most of the Kubernetes distributions before 1.10 supported Docker versions older than 17.05. If that's not enough, consider the possibility that you might not even use Docker in a Kubernetes cluster. It is very likely that ContainerD will be the preferable container engine in the future, and that is only one of many choices we can select. The point is that the container engine in a Kubernetes cluster should be in charge of running containers, and not much more. There should be no need for the nodes in a Kubernetes cluster to be able to build images.

Another issue is security. If we allow containers to mount the Docker socket, we are effectively allowing them to control all the containers running on that node. That by itself makes security departments freak out, and for a very good reason. Also, don't forget that we logged into the registry. Anyone on that node could push images to the same registry without the need for credentials. Even if we do log out, there was still a period when everyone could exploit the fact that the Docker server is authenticated and authorized to push images. Truth be told, we are not preventing anyone from mounting a Docker socket. At the moment, our policy is based on trust. That should change with PodSecurityPolicy. However, security is not the focus of this book, so I'll assume that you'll set up the policies yourself, if you deem them worthy of your time.

If that's not enough, there's also the issue of preventing Kubernetes from doing its job. The moment we adopt container schedulers, we accept that they are in charge of scheduling all the processes running inside the cluster.
If we start doing things behind their backs, we might end up messing with their scheduling capabilities. Everything we do without going through Kube API is unknown to Kubernetes.

We could use Docker inside Docker. That would allow us to build images inside containers without reaching out to the Docker socket on the nodes. However, that requires privileged access, which poses as much of a security risk as mounting a Docker socket. Actually, it is even riskier. So, we need to discard that option as well.

Another solution might be to use kaniko. It allows us to build Docker images from inside Pods. The process is done without Docker, so there is no dependency on the Docker socket, nor is there a need to run containers in privileged mode. However, at the time of this writing (May 2018) kaniko is still not ready. It is complicated to use, it does not support everything Docker does (e.g., multi-stage builds), it's not easy to decipher its logs (especially errors), and so on. The project will likely have a bright future, but it is still not ready for prime time.

Taking all this into consideration, the only viable option we have, for now, is to build our Docker images outside our cluster. The steps we should execute are the same as those we already ran. The only thing missing is to figure out how to create a build server and hook it up to our CI/CD tool. We'll revisit this subject later on. For now, we'll exit the container.

```
exit
```

Let's move on to the next stage of our pipeline.

### Running Functional Tests

Which steps do we need to execute in the functional testing phase? We need to deploy the new release of the application. Without it, there would be nothing to test. All the static tests were already executed when we built the image, so everything we do from now on will need a live application.

Deploying the application is not enough; we'll also have to validate that it rolled out successfully. Otherwise, we'll have to abort the process.

We'll have to be cautious about how we deploy the new release. Since we'll run it in the same cluster as production, we need to be careful that one does not affect the other. We already have a Namespace that provides some level of isolation. However, we'll have to be attentive not to use the same path or domain in Ingress as the one used for production. The two need to be accessible separately from each other until we are confident that the new release meets all the quality standards.

Finally, once the new release is running, we'll execute a set of tests that will validate it. Please note that we will run functional tests only. You should translate that into "in this stage, I run all kinds of tests that require a live application." You might want to add performance and integration tests as well. From the process point of view, it does not matter which tests you run. What matters is that in this stage you run all those that could not be executed statically when we built the image.

If any step in this stage fails, we need to be prepared to destroy everything we did and leave the cluster in the same state as before we started this stage. We'll postpone the exploration of rollback steps until one of the next chapters. I'm sure you know how to do it anyway. If you don't, I'll leave you feeling ashamed until the next chapter.

As you probably guessed, we'll need to go into the `kubectl` container for at least some of the steps in this stage. It is already running as part of the `cd` Pod.
Remember, we are performing a manual simulation of a CDP pipeline. We must assume that everything will be executed from inside the cluster, not from your laptop.

```
kubectl -n go-demo-3-build \
    exec -it cd -c kubectl -- sh
```

The project contains separate definitions for deploying the test and production releases. For now, we are interested only in the former, which is defined in `k8s/build.yml`.

```
cat k8s/build.yml
```

We won't comment on all the resources defined in that YAML since they are very similar to those we used before. Instead, we'll take a quick look at the differences between a test and a production release.

```
diff k8s/build.yml k8s/prod.yml
```

The two are almost the same. One uses the `go-demo-3-build` Namespace while the other works with `go-demo-3`. The `path` of the Ingress resource also differs. Non-production releases will be accessible through `/beta/demo` and thus provide separation from the production release accessible through `/demo`. Everything else is the same.

It's a pity that we had to create two separate YAML files only because of a few differences (Namespace and Ingress). We'll discuss the challenges behind rapid deployments using standard YAML files later. For now, we'll roll with what we have.

Even though we separated production and non-production releases, we still need to modify the tag of the image on the fly. The alternative would be to change release numbers with each commit, but that would represent a burden to developers and a likely source of errors. So, we'll go back to exercising "magic" with `sed`.

```
cat k8s/build.yml | sed -e \
    "s@:latest@:1.0-beta@g" \
    | tee /tmp/build.yml
```

We output the contents of the `k8s/build.yml` file, modified it with `sed` so that the `1.0-beta` tag is used instead of `latest`, and stored the result in `/tmp/build.yml`.

Now we can deploy the new release.

```
kubectl apply \
    -f /tmp/build.yml --record

kubectl rollout status deployment api
```

We applied the new definition and waited until it rolled out. Even though we know that the rollout was successful by reading the output, we cannot rely on such methods once we switch to full automation of the pipeline. Fortunately, the `rollout status` command will exit with `0` if everything is OK, and with a different code if it's not. Let's check the exit code of the last command.

```
echo $?
```
The output is `0`, thus confirming that the rollout was successful. If it was anything else, we'd need to roll back or, even better, quickly fix the problem and roll forward.
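Rolling back is not part of this chapter, but if you are curious what that option would look like, a minimal sketch of my own (using nothing but standard `kubectl` commands) follows.

```
# Return the Deployment to the previous revision...
kubectl -n go-demo-3-build \
    rollout undo deployment api

# ...and wait until the rollback itself finishes rolling out.
kubectl -n go-demo-3-build \
    rollout status deployment api
```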
The only thing missing in this stage is to run the tests. However, before we do that, we need to find out the address through which the application can be accessed.

```
ADDR=$(kubectl -n go-demo-3-build \
    get ing api \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")/beta

echo $ADDR | tee /workspace/addr

exit
```

We retrieved the `hostname` from Ingress with the appended path (`/beta`) dedicated to beta releases. Further on, we stored the result in the `/workspace/addr` file. That way we'll be able to retrieve it from the other containers running in the same Pod. Finally, we exited the container since the next steps will require a different one.
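Depending on where your cluster runs, the Ingress status might publish an IP address rather than a hostname (GKE and minikube tend to do that). In that case, an equivalent sketch (my assumption, not part of the original walkthrough) reads the `ip` field instead.

```
# Variant for clusters whose load balancer exposes an IP instead of a hostname.
ADDR=$(kubectl -n go-demo-3-build \
    get ing api \
    -o jsonpath="{.status.loadBalancer.ingress[0].ip}")/beta

echo $ADDR | tee /workspace/addr
```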
Let's go inside the `golang` container. We'll need it to execute the functional tests.

```
kubectl -n go-demo-3-build \
    exec -it cd -c golang -- sh
```

Before we run the functional tests, we'll send a request to the application manually. That will give us confidence that everything we did so far works as expected.

```
curl "http://$(cat addr)/demo/hello"
```

We constructed the address using the information we stored in the `addr` file and sent a `curl` request. The output is `hello, world!`, thus confirming that the test release of the application seems to be deployed correctly.

The tests require a few dependencies, so we'll download them using the `go get` command. Don't worry if you're new to Go. This exercise is not aimed at teaching you how to work with it, but only at showing you the principles that apply to almost any language. In your head, you can replace the command that follows with `maven` this, `gradle` that, `npm` whatever.

```
go get -d -v -t
```

The tests expect the environment variable `ADDRESS` to tell them where to find the application under test, so our next step is to declare it.

```
export ADDRESS=api:8080
```

In this case, we chose to allow the tests to communicate with the application through the Service called `api`. Now we're ready to execute the tests.

```
go test ./... -v --run FunctionalTest
```

The output is as follows.

```
=== RUN   TestFunctionalTestSuite
=== RUN   TestFunctionalTestSuite/Test_Hello_ReturnsStatus200
2018/05/14 14:41:25 Sending a request to http://api:8080/demo/hello
=== RUN   TestFunctionalTestSuite/Test_Person_ReturnsStatus200
2018/05/14 14:41:25 Sending a request to http://api:8080/demo/person
--- PASS: TestFunctionalTestSuite (0.03s)
--- PASS: TestFunctionalTestSuite/Test_Hello_ReturnsStatus200 (0.01s)
--- PASS: TestFunctionalTestSuite/Test_Person_ReturnsStatus200 (0.01s)
PASS
ok _/go/go-demo-3 0.129s
```

We can see that the tests passed, and we can conclude that the application is a step closer towards production. In a real-world situation, you'd run other types of tests or maybe bundle them all together. The logic is still the same. We deployed the application under test while leaving production intact, and we validated that it behaves as expected. We are ready to move on.

Testing an application through the Service associated with it is a good idea if, for some reason, we are not allowed to expose it to the outside world through Ingress. If there is no such restriction, executing the tests through a DNS name which points to an external load balancer, which forwards to the Ingress service on one of the worker nodes, and from there load balances to one of the replicas, is much closer to how our users access the application. Using the "real" externally accessible address is a better option when that is possible, so we'll change our `ADDRESS` variable and execute the tests one more time.

```
export ADDRESS=$(cat addr)

go test ./... -v --run FunctionalTest
```

We're almost finished with this stage. The only thing left is to exit the `golang` container, go back to `kubectl`, and remove the application under test.

```
exit

kubectl -n go-demo-3-build \
    exec -it cd -c kubectl -- sh

kubectl delete \
    -f /workspace/k8s/build.yml
```

We exited the `golang` container and entered `kubectl` to delete the test release.

https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00009.jpeg

Figure 3-3: The functional testing stage of a continuous deployment pipeline

Let's take a look at what's left in the Namespace.

```
kubectl -n go-demo-3-build get all
```

The output is as follows.

```
NAME  READY STATUS  RESTARTS AGE
po/cd 4/4   Running 0        11m
```

Our `cd` Pod is still running. We will remove it later, when we're confident that we don't need any of the tools it contains.

There's no need for us to stay inside the `kubectl` container anymore, so we'll exit.

```
exit
```

### Creating Production Releases

We are ready to create our first production release.
We trust our tests, and they proved that it is relatively safe to deploy to production. Since we cannot deploy to air, we need to create a production release first.

Please make sure to replace `[...]` with your Docker Hub user in one of the commands that follow.

```
kubectl -n go-demo-3-build \
    exec -it cd -c docker -- sh

export DH_USER=[...]

docker image tag \
    $DH_USER/go-demo-3:1.0-beta \
    $DH_USER/go-demo-3:1.0

docker image push \
    $DH_USER/go-demo-3:1.0
```

We went back to the `docker` container, we tagged the `1.0-beta` release as `1.0`, and we pushed it to the registry (in this case, Docker Hub). Both commands should take no time to execute since we already have all the layers cached in the registry.

We'll repeat the same process, but this time with the `latest` tag.

```
docker image tag \
    $DH_USER/go-demo-3:1.0-beta \
    $DH_USER/go-demo-3:latest

docker image push \
    $DH_USER/go-demo-3:latest

exit
```

Now we have the same image tagged and pushed to the registry as `1.0-beta`, `1.0`, and `latest`.

You might be wondering why we have three tags. They all point to the same image, but they serve different purposes. The `1.0-beta` tag is a clear indication that the image might not have been tested and might not be ready for prime time. That's why we intentionally postponed tagging until this point. It would be simpler if we tagged and pushed everything at once when we built the image. However, that would send a wrong message to those using our images. If one of the steps failed during the pipeline, it would be an indication that the commit is not ready for production. As a result, if we pushed all the tags at once, others might have decided to use `1.0` or `latest` without knowing that it is faulty.

We should always be explicit about the versions we deploy to production, so the `1.0` tag is what we'll use. That will help us control what we have and debug problems if they occur. However, others might not want to use explicit versions. A developer might want to deploy the last stable version of an application created by a different team. In those cases, developers might not care which version is in production. Deploying `latest` is probably a good idea in such a case, assuming that we take good care that it (almost) always works.

https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00010.jpeg

Figure 3-4: The release stage of a continuous deployment pipeline
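If you are curious, you can confirm that the three tags are nothing more than aliases for a single image. Run the command below from inside the `docker` container (before the `exit`), or from any host where those images are present; it is an optional aside of mine, not a step from the pipeline.

```
# All three tags should report the same IMAGE ID.
docker image ls $DH_USER/go-demo-3
```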
Since that is something Kubernetes does out-of-the-box, the command will always be the same.

Next, we'll wait until the release rolls out before we check the exit code.

```bash
kubectl -n go-demo-3 \
    rollout status deployment api

echo $?
```

The exit code is `0`, so we can assume that the rollout was successful. There's no need even to look at the Pods. They are almost certainly running.

Now that the production release is up-and-running, we should find the address through which we can access it. Excluding the difference in the Namespace, the command for retrieving the hostname is the same, and we'll store the result in the `prod-addr` file.

```bash
ADDR=$(kubectl -n go-demo-3 \
    get ing api \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")

echo $ADDR | tee prod-addr
```

### Running Production Tests

The process of running production tests is the same as the one we used for functional tests, except that this time they run against the production release. We'll go back into the `golang` container and declare the `ADDRESS` variable using the address we stored in the `prod-addr` file.

```bash
kubectl -n go-demo-3-build \
    exec -it cd -c golang -- sh

export ADDRESS=$(cat prod-addr)
```

Now that we have the address required for the tests, we can go ahead and execute them.

```bash
go test ./... -v --run ProductionTest
```

The output of the command is as follows.

```
=== RUN   TestProductionTestSuite
=== RUN   TestProductionTestSuite/Test_Hello_ReturnsStatus200
--- PASS: TestProductionTestSuite (0.10s)
    --- PASS: TestProductionTestSuite/Test_Hello_ReturnsStatus200 (0.01s)
PASS
ok      _/go/go-demo-3  0.107s
```

https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00012.jpeg

Figure 3-6: The production testing stage of a continuous deployment pipeline

Production tests were successful, and we can conclude that the deployment was successful as well. All that's left is to exit the container before we clean up.

```bash
exit
```

### Cleaning Up Pipeline Leftovers

The last step in our manually-executed pipeline is to remove all the resources we created, except the production release. Since they are all Pods in the same Namespace, that should be reasonably easy. We can remove them all from `go-demo-3-build`.

```bash
kubectl -n go-demo-3-build \
    delete pods --all
```

The output is as follows.

```
pod "cd" deleted
```

https://github.com/OpenDocCN/freelearn-devops-pt4-zh/raw/master/docs/dop-24-tlkt/img/00013.jpeg

Figure 3-7: The cleanup stage of a continuous deployment pipeline

That's it. Our continuous deployment pipeline is finished. Or, to be more precise, we defined all the steps of the pipeline. We are yet to automate everything.

### Did We Do It?

We only partially succeeded in defining our continuous deployment stages. We did manage to execute all the necessary steps. We cloned the code, we ran unit tests, and we built the binary and the Docker image. We deployed the application under test without affecting the production release, and we ran functional tests. Once we confirmed that the application works as expected, we updated production with the new release. The new release was deployed through rolling updates but, since it was the first release, we did not see that effect. Finally, we ran another round of tests to confirm that the rolling updates were successful and that the new release is integrated with the rest of the system.

You might be wondering why I said that "we only partially succeeded." We executed the full pipeline. Didn't we?

One of the problems we're facing is that our process can run only a single pipeline for an application. If another commit is pushed while our pipeline is in progress, it would need to wait in a queue.
We cannot have a separate Namespace for each build, since we'd need cluster-wide permissions to create Namespaces, and that would defy the purpose of having RBAC. So, the Namespaces need to be created in advance. We might create a few Namespaces dedicated to building and testing, but that would still be sub-optimal. We'll stick with a single Namespace, with the pending task of figuring out how to deploy multiple revisions of an application in the same Namespace given to us by the cluster administrator.

Another problem is the horrifying usage of `sed` commands to modify the content of a YAML file. There must be a better way to parametrize the definition of an application. We'll try to solve that problem in the next chapter.

Once we start running multiple builds of the same application, we'll need to figure out how to remove the tools we create as part of our pipeline. Commands like `kubectl delete pods --all` will obviously not work if we plan to run multiple pipelines in parallel. We'll need to restrict the removal to only the Pods spun up by the build we just finished, not all those in the Namespace (a rough sketch of that idea follows a few paragraphs below). The CI/CD tools we'll use later might be able to help with this problem.

We are also missing quite a few steps in our pipeline. Those are the issues we will not try to fix in this book. The steps we explored so far are common to almost all pipelines. We always run different types of tests, some of which are static (e.g., unit tests), while others need a live application (e.g., functional tests). We always need to build a binary or package our application. We need to build an image and deploy it to one or more locations. The rest of the steps differ from one case to another. You might want to send test results to SonarQube, or you might choose to make a GitHub release. If your images can be deployed to different platforms (e.g., Linux, Windows, ARM), you might want to create a manifest file. You'll probably run some security scanning as well. The list of things you might do is almost unlimited, so I chose to stick with the steps that are very common and, in many cases, mandatory. Once you grasp the principles behind a well-defined, fully automated, and container-based pipeline executed on top of a scheduler, I'm sure you won't have a problem extending our examples to fit your particular needs.

How about building Docker images? That is also one of the items on our TODO list. We shouldn't build them inside the Kubernetes cluster, both because mounting the Docker socket is a huge security risk and because we should not run anything without going through the Kube API. Our best bet, for now, is to build them outside the cluster. We are yet to discover how to do that effectively. I suspect that will be a very easy challenge.

One message I tried to convey is that everything related to an application should be in the same repository. That applies not only to the source code and tests, but also to build scripts, the Dockerfile, and the Kubernetes definitions. Outside of that application-related repository should be only the code and configuration that transcends a single application (e.g., cluster setup). We'll continue using the same separation throughout the rest of the book. Everything required by `go-demo-3` will be in the vfarcic/go-demo-3 repository. Cluster-wide code and configuration will continue living in vfarcic/k8s-specs.
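As a rough illustration of the cleanup problem mentioned above, a label-based removal might look like the sketch that follows. The `build` label and its value are hypothetical; our manually-executed pipeline does not set any such label yet, so treat this as a direction rather than a command you can run against the current setup.

```bash
# Hypothetical: assumes the pipeline labeled the Pods it created,
# e.g. with build=go-demo-3-1.0-beta, instead of relying on --all.
kubectl -n go-demo-3-build \
    delete pods -l build=go-demo-3-1.0-beta
```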
The logic behind the everything-an-application-needs-is-in-a-single-repository mantra is vital if we want to empower the teams to be in charge of their applications. It's up to those teams to choose how to do something, and it's everyone else's job to teach them the skills they need. With some other tools, such an approach would pose a big security risk and could put other teams in danger. However, Kubernetes provides quite a few tools that help us avoid those risks without sacrificing the autonomy of the teams in charge of application development. We have RBAC and Namespaces. We have ResourceQuotas, LimitRanges, PodSecurityPolicies, NetworkPolicies, and quite a few other tools at our disposal.

### What Now?

We're done, for now. If you're not planning to jump to the next chapter right away, and if the cluster is dedicated to the exercises in this book, please destroy it. Otherwise, execute the command that follows to remove everything we did in this chapter.

```bash
kubectl delete ns \
    go-demo-3 go-demo-3-build
```
Chapter 10: Packaging Kubernetes Applications
We have faced quite a few challenges so far. The good news is that we managed to solve most of them. The bad news is that, in some cases, our solutions did not feel great (a politically correct way of saying they were awful).
We spent quite a bit of time defining the Jenkins resources back in the chapter on deploying stateful applications at scale. That was a good exercise and can be considered a learning experience, but there is still work to be done before it becomes a truly useful definition. The main problem with our Jenkins setup is that it is still not automated. We can spin up a master, but we still have to go through the setup wizard manually. Once the setup is finished, we need to install a few plugins and change the configuration. Before we go down that road, we might want to explore whether someone else has already done that work for us. If we were looking for a Java library that solves a problem in one of our applications, we would probably search a Maven repository. Maybe something similar exists for Kubernetes applications. Perhaps there is a community-maintained repository with installation solutions for commonly used tools. Finding such a place will be one of our missions in this chapter.
Another problem we are facing is the customization of our YAML files. At the very least, we need to specify a different image tag every time we deploy a new release. In the chapter on defining continuous deployment, we had to use `sed` to modify the definitions before sending them to the Kube API through `kubectl`. While that works, I'm sure you'll agree that commands like `sed -e "s@:latest@:1.7@g"` are not very intuitive. They look and feel clumsy. To make things more complicated, the image tag is usually not the only thing that changes from one deployment to another. We might need to change the domain or the path of the Ingress controller to accommodate deploying the application to different environments (e.g., staging and production). The same applies to the number of replicas and many of the other things that define what we want to install. Chaining `sed` commands can quickly become complex, and it is not very user friendly (see the sketch that follows this paragraph). True, we could modify the YAML file every time we, for example, make a new release. We could also create a different set of definitions for each environment we plan to use. But we won't do that. That would only lead to duplication and a maintenance nightmare. We already have two YAML files for the `go-demo-3` application (one for testing, the other for production). If we continue down that road, we might end up with ten, twenty, or even more versions of the same definitions. We might also be forced to change them with every commit to keep the tags up to date. That road is not the one we'll take; it leads off a cliff. What we need is a templating mechanism that allows us to modify definitions before sending them to the Kube API.
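To make the point concrete, here is a rough sketch of where the `sed` approach leads once more than the tag needs to change. Only the `:latest` substitution comes from the previous chapter; the host and replica substitutions are made-up examples of the kind of per-environment edits that tend to pile up.

```bash
# A made-up chain of substitutions; only the tag replacement was
# actually used in the previous chapter, the rest is illustrative.
cat k8s/prod.yml \
    | sed -e "s@:latest@:1.7@g" \
    | sed -e "s@host: acme.com@host: staging.acme.com@g" \
    | sed -e "s@replicas: 3@replicas: 1@g" \
    | kubectl apply -f - --record
```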
The last problem we'll try to tackle in this chapter is the need to describe our applications, along with the changes others might want to apply before installing them in their clusters. Truth be told, that is already possible. Anyone could read our YAML files to deduce what constitutes an application. Anyone could take our YAML files and modify them to suit their own needs. In some cases, even people with Kubernetes experience might find that challenging. However, our main concern is the people who are not Kubernetes experts. We cannot expect everyone in our organization to spend a year learning Kubernetes just so that they can deploy applications. On the other hand, we do want to provide that ability to everyone. We want to empower everyone. Faced with the reality that everyone needs to use Kubernetes while not everyone will become a Kubernetes expert, we clearly need a more descriptive, more easily customizable, and more user-friendly way to discover and deploy applications.
We'll try to solve those problems, and a few others, in this chapter. We'll do our best to find a place where the community contributes definitions of commonly used applications (e.g., Jenkins). We'll look for a templating mechanism that lets us customize our applications before installing them. Finally, we'll try to find a better way to document our definitions. We'll do our best to make all that simple enough that even people who don't know Kubernetes can safely deploy applications to a cluster. What we need is a Kubernetes equivalent of a package manager, similar to `apt`, `yum`, `apk`, Homebrew, or Chocolatey, combined with the ability to document our packages in a way that anyone can use them.
I'll save you the trouble of searching for a solution and reveal the answer right away. We'll explore Helm as the missing piece that will make our deployments customizable and user friendly. If we're lucky, it might even turn out to be the solution that saves us from reinventing the wheel for commonly used applications.
Before we move on, we'll need a cluster. It's time to get our hands dirty.
### Creating A Cluster
It's hands-on time again. We'll need to go back to the local copy of the vfarcic/k8s-specs repository and pull the latest version.
```bash
cd k8s-specs

git pull
```
Just as in the previous chapters, we'll need a cluster if we are to execute the hands-on exercises. The rules are still the same. You can continue using the same cluster as before, or you can switch to a different Kubernetes flavor. You can keep using one of the Kubernetes distributions listed below, or be adventurous and try something different. If you go with the latter, please let me know how it went, and I'll test it myself and incorporate it into the list.

The cluster requirements in this chapter are the same as in the previous one. We'll need at least 3 CPUs and 3 GB RAM if running a single-node cluster, and slightly more if those resources are spread across multiple nodes.

For your convenience, the Gists and the specs we used in the previous chapter are available here as well.

* [docker4mac-3cpu.sh](https://gist.github.com/bf08bce43a26c7299b6bd365037eb074): **Docker for Mac** with 3 CPUs, 3 GB RAM, and with nginx Ingress.
* [minikube-3cpu.sh](https://gist.github.com/871b5d7742ea6c10469812018c308798): **minikube** with 3 CPUs, 3 GB RAM, and with the `ingress`, `storage-provisioner`, and `default-storageclass` addons enabled.
* [kops.sh](https://gist.github.com/2a3e4ee9cb86d4a5a65cd3e4397f48fd): **kops in AWS** with 3 t2.small masters and 2 t2.medium nodes spread across three availability zones, and with nginx Ingress (assumes that the prerequisites are set through Appendix B).
* [minishift-3cpu.sh](https://gist.github.com/2074633688a85ef3f887769b726066df): **minishift** with 3 CPUs, 3 GB RAM, and version 1.16+.
* [gke-2cpu.sh](https://gist.github.com/e3a2be59b0294438707b6b48adeb1a68): **Google Kubernetes Engine (GKE)** with 3 n1-highcpu-2 (2 CPUs, 1.8 GB RAM) nodes (one in each zone), and with the nginx Ingress controller running on top of the "standard" one that comes with GKE. We'll use nginx Ingress for compatibility with the other platforms. Feel free to modify the YAML files if you prefer NOT to install nginx Ingress.
* [eks.sh](https://gist.github.com/5496f79a3886be794cc317c6f8dd7083): **Elastic Kubernetes Service (EKS)** with 2 t2.medium nodes, with the **nginx Ingress** controller, and with a **default StorageClass**.

With a cluster up-and-running, we can proceed with an introduction to Helm.

### What Is Helm?

I will not explain Helm here. I won't even give you the elevator pitch. I'll only say that it is a project with a big and healthy community, that it is a member of the [Cloud Native Computing Foundation (CNCF)](https://www.cncf.io/), and that it has the backing of big players like Google, Microsoft, and a few others. For everything else, you'll need to follow the exercises. They'll lead us towards an understanding of the project, and they will hopefully help us in our goal to refine our continuous deployment pipeline. The first step is to install it.

### Installing Helm

Helm is a client/server type of application. We'll start with the client. Once we have it running, we'll use it to install the server (Tiller) inside our newly created cluster.

The Helm client is a command line utility responsible for the local development of Charts, managing repositories, and interaction with Tiller. The Tiller server, on the other hand, runs inside a Kubernetes cluster and interacts with the Kube API. It listens for incoming requests from the Helm client, combines Charts and configuration values to build a release, installs Charts and tracks subsequent releases, and is in charge of upgrading and uninstalling Charts through interaction with the Kube API.
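As a small aid for keeping the client/server split in mind: once both sides are installed, asking for the version reports the client and the Tiller (server) versions separately. This is only a quick sanity check, not one of the book's steps.

```bash
# Helm 2 prints both the Client and the Server (Tiller) version,
# which only works once Tiller is installed (we'll do that shortly).
helm version
```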
I'm sure that this brief explanation is more confusing than helpful. Worry not. Everything will be explained soon through examples. For now, we'll focus on installing Helm and Tiller.

If you are a **macOS user**, please use [Homebrew](https://brew.sh/) to install Helm. The command is as follows.

```bash
brew install kubernetes-helm
```

If you are a **Windows user**, please use [Chocolatey](https://chocolatey.org/) to install Helm. The command is as follows.

```bash
choco install kubernetes-helm
```

Finally, if you are neither a Windows nor a macOS user, you must be running **Linux**. Please go to the [releases](https://github.com/kubernetes/helm/releases) page, download the `tar.gz` file, unpack it, and move the binary to `/usr/local/bin/`.

If you already have Helm installed, please make sure that it is newer than 2.8.2. That version, and probably a few versions before it, was failing on Docker For Mac/Windows.

Once you're done installing (or upgrading) Helm, please execute `helm help` to verify that it is working.

We are about to install *Tiller*. It'll run inside our cluster. Just as `kubectl` is a client that communicates with the Kube API, `helm` will propagate our wishes to `tiller` which, in turn, will issue requests to the Kube API. It should come as no surprise that Tiller will be yet another Pod in our cluster. As such, you should already know that we'll need a ServiceAccount that will allow it to establish communication with the Kube API. Since we hope to use Helm for all our installations in Kubernetes, we should give that ServiceAccount very generous permissions across the whole cluster.

Let's take a look at the definition of the ServiceAccount we'll create for Tiller.

```bash
cat helm/tiller-rbac.yml
```

The output is as follows.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Since by now you are an expert in ServiceAccounts, there should be no need for a detailed explanation of the definition. We're creating a ServiceAccount called `tiller` in the `kube-system` Namespace, and we are binding it to the ClusterRole `cluster-admin`. In other words, the account will be able to execute any operation anywhere inside the cluster. You might be thinking that having such broad permissions seems dangerous, and you would be right. Only a handful of people should have user permissions to operate inside the `kube-system` Namespace. On the other hand, we can expect a much wider circle of people to be able to use Helm. We'll solve that problem in one of the next chapters. For now, we'll focus only on how Helm works, and get back to the permissions issue later.
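If you want to convince yourself later just how broad those permissions are, you could ask the API on behalf of the ServiceAccount once it exists. This is only an optional check, and it assumes that your own user is allowed to impersonate ServiceAccounts.

```bash
# Asks whether the tiller ServiceAccount may perform any verb on any
# resource; with cluster-admin bound, the answer should be "yes".
kubectl auth can-i '*' '*' \
    --as system:serviceaccount:kube-system:tiller
```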
Let's create the ServiceAccount.

```bash
kubectl create \
    -f helm/tiller-rbac.yml \
    --record --save-config
```

We can see from the output that both the ServiceAccount and the ClusterRoleBinding were created. Now that we have a ServiceAccount that gives Helm full permissions to manage any Kubernetes resource, we can proceed and install Tiller.

```bash
helm init --service-account tiller

kubectl -n kube-system \
    rollout status deploy tiller-deploy
```

We used `helm init` to create the server component called `tiller`. Since our cluster uses RBAC and all processes require authentication and permissions to communicate with the Kube API, we added the `--service-account tiller` argument. It attaches the ServiceAccount to the `tiller` Pod. The latter command waits until the Deployment is rolled out.

We could have specified the `--tiller-namespace` argument to deploy it to a specific Namespace. That ability will come in handy in one of the next chapters. For now, we omitted that argument, so Tiller was installed in the `kube-system` Namespace by default.

To be on the safe side, we'll list the Pods and confirm that it is indeed running.

```bash
kubectl -n kube-system get pods
```

The output, limited to the relevant parts, is as follows.

```
NAME              READY STATUS  RESTARTS AGE
...
tiller-deploy-... 1/1   Running 0        59s
```

Helm already has a single repository pre-configured. For those of you who just installed Helm for the first time, the repository is up-to-date. On the other hand, if you happen to have Helm from before, you might want to update the repository references by executing the command that follows.

```bash
helm repo update
```

The only thing left is to search for our favorite application, hoping that it is available in the Helm repository.

```bash
helm search
```

The output, limited to the last few entries, is as follows.

```
...
stable/weave-scope 0.9.2 1.6.5 A Helm chart for the Weave Scope cluster visual...
stable/wordpress   1.0.7 4.9.6 Web publishing platform for building blogs and ...
stable/zeppelin    1.0.1 0.7.2 Web-based notebook that enables data-driven, in...
stable/zetcd       0.1.9 0.0.3 CoreOS zetcd Helm chart for Kubernetes
```

We can see that the default repository already contains quite a few commonly used applications. It is the repository that holds the official Kubernetes Charts, which are carefully curated and well maintained. Later on, in one of the next chapters, we'll add more repositories to our local Helm installation. For now, we just need Jenkins, which happens to be one of the official Charts.
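By the way, if you're curious which repositories your Helm client points to, you can list them. On a fresh Helm 2 installation, you should see the pre-configured `stable` repository, plus a `local` one used for Chart development.

```bash
# Lists the Chart repositories known to the Helm client.
helm repo list
```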
I already mentioned Charts a few times. You'll find out what they are soon. For now, all you should know is that a Chart defines everything an application needs to run in a Kubernetes cluster.

### Installing Helm Charts

The first thing we'll do is confirm that Jenkins indeed exists in the official Helm repository. We could do that by executing `helm search` (again) and going through all the available Charts. However, the list is pretty big and growing by the day. We'll filter the search to narrow down the output.

```bash
helm search jenkins
```

The output is as follows.

```
NAME           CHART VERSION APP VERSION DESCRIPTION
stable/jenkins 0.16.1        2.107       Open source continuous integration server. It s...
```

We can see that the repository contains the `stable/jenkins` Chart based on Jenkins version 2.107.

We'll install Jenkins with the default values first. If that works as expected, we'll try to adapt it to our needs later on. Now that we know (through `search`) that the name of the Chart is `stable/jenkins`, all we need to do is execute `helm install`.

```bash
helm install stable/jenkins \
    --name jenkins \
    --namespace jenkins
```

We instructed Helm to install `stable/jenkins` with the name `jenkins`, inside the Namespace also called `jenkins`.

The output is as follows.

```
NAME:   jenkins
LAST DEPLOYED: Sun May ...
NAMESPACE: jenkins
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME           TYPE          CLUSTER-IP      EXTERNAL-IP  PORT(S)         AGE
jenkins-agent  ClusterIP     10.111.123.174  <none>       50000/TCP       1s
jenkins        LoadBalancer  10.110.48.57    localhost    8080:31294/TCP  0s

==> v1beta1/Deployment
NAME     DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
jenkins  1        1        1           0          0s

==> v1/Pod(related)
NAME         READY  STATUS    RESTARTS  AGE
jenkins-...  0/1    Init:0/1  0         0s

==> v1/Secret
NAME     TYPE    DATA  AGE
jenkins  Opaque  2     1s

==> v1/ConfigMap
NAME           DATA  AGE
jenkins        4     1s
jenkins-tests  1     1s

==> v1/PersistentVolumeClaim
NAME     STATUS  VOLUME   CAPACITY  ACCESS MODES  STORAGECLASS  AGE
jenkins  Bound   pvc-...  8Gi       RWO           gp2           1s


NOTES:
1. Get your 'admin' user password by running:
  printf $(kubectl get secret --namespace jenkins jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status of it by running 'kubectl get svc --namespace jenkins -w jenkins'
  export SERVICE_IP=$(kubectl get svc --namespace jenkins jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}")
  echo http://$SERVICE_IP:8080/login

3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
  https://cloud.google.com/solutions/jenkins-on-container-engine
```

At the top of the output, we can see some general information like the name we gave to the installed Chart (`jenkins`), when it was deployed, what the Namespace is, and the status.

Below the general information is the list of the installed resources. We can see that the Chart installed two Services; one for the master and the other for the agents. Further down are the Deployment and the Pod. The Chart also created a Secret that holds the administrative username and password. We'll use it soon. Next, we can see that it created two ConfigMaps. One (`jenkins`) holds all the configurations Jenkins might need. Later on, when we customize the installation, the data in that ConfigMap will reflect those changes. The second ConfigMap (`jenkins-tests`) is, at the moment, used only to provide the command used for executing liveness and readiness probes. Finally, we can see that a PersistentVolumeClaim was created as well, thus making our Jenkins fault tolerant without losing its state.

Don't worry if you feel overwhelmed. We'll do a couple of iterations of the Jenkins installation process, and that will give us plenty of opportunities to explore this Chart in more detail. If you are impatient, please `describe` any of those resources to get more insight into what was installed.

One thing worth commenting on right away is the type of the `jenkins` Service. It is, by default, set to `LoadBalancer`. We did not explore that type in The DevOps 2.3 Toolkit: Kubernetes, primarily because the book is, for the most part, based on minikube. On cloud providers which support external load balancers, setting the type field to `LoadBalancer` provisions an external load balancer for the Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's `status.loadBalancer` field.

When a Service is of the `LoadBalancer` type, it publishes a random port just as if it were of the `NodePort` type. The additional feature is that it also communicates that change to the external load balancer (LB) which, in turn, should open a port as well. In most cases, the port opened in the external LB will be the same as the Service's `TargetPort`. For example, if the `TargetPort` of a Service is `8080` and the published port is `32456`, the external LB will be configured to accept traffic on port `8080`, and it will forward that traffic to one of the healthy nodes on port `32456`. From there on, requests are picked up by the Service, and the standard process of forwarding them further towards the replicas begins. From the user's perspective, it looks as if the published port is the same as the `TargetPort`. The problem is that not all load balancers and hosting vendors support the `LoadBalancer` type, so we'll have to change it to `NodePort` in some of the cases. Those changes will be outlined in notes specific to the Kubernetes flavor.

Going back to the Helm output… At the bottom of the output, we can see the post-installation instructions provided by the authors of the Chart. In our case, those instructions tell us how to retrieve the administrative password from the Secret, how to open Jenkins in a browser, and how to log in.
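If you'd like to see the LoadBalancer mechanics described above for yourself, the relevant fields can be read straight from the `jenkins` Service the Chart created. The jsonpath expressions reference standard Service fields; what the load balancer entry contains (a hostname, an IP, or nothing at all) depends on your Kubernetes flavor.

```bash
# Prints the Service port, the published node port, and whatever the
# external load balancer reported, if anything.
kubectl -n jenkins get svc jenkins \
    -o jsonpath="{.spec.ports[0].port} {.spec.ports[0].nodePort} {.status.loadBalancer.ingress[0]}"
```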
Next, we'll wait until the `jenkins` Deployment is rolled out.

```bash
kubectl -n jenkins \
    rollout status deploy jenkins
```

We are almost ready to open Jenkins in a browser. But, before we do that, we need to retrieve the hostname (or IP) through which we can access our first Helm install.

```bash
ADDR=$(kubectl -n jenkins \
    get svc jenkins \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"):8080

echo $ADDR
```

The format of the output will differ from one Kubernetes flavor to another. In case of AWS with kops, it should be similar to the one that follows.

```
...us-east-2.elb.amazonaws.com:8080
```

Now we can finally open Jenkins. We won't do much with it. Our goal, for now, is only to confirm that it is up-and-running.

```bash
open "http://$ADDR"
```

You should be presented with the login screen. There is no setup wizard, which indicates that this Helm Chart already configured Jenkins with some sensible default values. That means that, among other things, the Chart created a user with a password during the automatic setup. We need to discover it. Fortunately, we already saw from the `helm install` output that we should retrieve the password from the `jenkins-admin-password` entry of the `jenkins` Secret. If you need to refresh your memory, please scroll back to that output, or ignore it altogether and execute the command that follows.

```bash
kubectl -n jenkins \
    get secret jenkins \
    -o jsonpath="{.data.jenkins-admin-password}" \
    | base64 --decode; echo
```

The output should be a random set of characters similar to the one that follows.

```
shP7Fcsb9g
```

Please copy the output and return to the Jenkins login screen in your browser. Type *admin* into the *User* field, paste the copied output into the *Password* field, and click the *log in* button.

Mission accomplished. Jenkins is up-and-running without us spending any time writing YAML files with all the resources. It was set up automatically with the administrative user and probably quite a few other goodies. We'll get to them later. For now, we'll "play" with a few other `helm` commands that might come in handy.

If you are ever unsure about the details behind one of the Helm Charts, you can execute `helm inspect`.

```bash
helm inspect stable/jenkins
```

The output of the `inspect` command is too big to be presented in a book. It contains all the information you might need before installing an application (in this case Jenkins).

If you prefer to go through the available Charts visually, you might want to visit the [Kubeapps](https://kubeapps.com/) project hosted by [bitnami](https://bitnami.com/). Click on the *Explore Apps* button, and you'll be sent to the hub with the list of all the official Charts. If you search for Jenkins, you'll end up on the [page with the Chart's details](https://hub.kubeapps.com/charts/stable/jenkins).
You'll notice that the info on that page is the same as the output of the `inspect` command. We won't go back to [Kubeapps](https://kubeapps.com/), since I prefer the command line over UIs. A firm grip on the command line helps a lot when it comes to automation, which happens to be the goal of this book.

With time, the number of Charts running in your cluster will increase, and you might need to list them. You can do that with the `ls` command.

```bash
helm ls
```

The output is as follows.

```
NAME    REVISION UPDATED     STATUS   CHART          NAMESPACE
jenkins 1        Thu May ... DEPLOYED jenkins-0.16.1 jenkins
```

There is not much to look at right now, since we have only one Chart. Just remember that the command exists. It'll come in handy later on.

If you need to see the details behind one of the installed Charts, please use the `status` command.

```bash
helm status jenkins
```

The output should be very similar to the one you saw when we installed the Chart. The only difference is that this time all the Pods are running.

Tiller obviously stores the information about the installed Charts somewhere. Unlike most other applications that tend to save their state on disk or replicate data across multiple instances, Tiller uses Kubernetes ConfigMaps to preserve its state. Let's take a look at the ConfigMaps in the `kube-system` Namespace where Tiller is running.

```bash
kubectl -n kube-system get cm
```

The output, limited to the relevant parts, is as follows.

```
NAME       DATA AGE
...
jenkins.v1 1    25m
...
```

We can see that there is a ConfigMap named `jenkins.v1`. We did not explore revisions just yet. For now, only assume that each new installation of a Chart is revision 1. Let's take a look at the contents of the ConfigMap.

```bash
kubectl -n kube-system \
    describe cm jenkins.v1
```

The output is as follows.

```
Name:         jenkins.v1
Namespace:    kube-system
Labels:       MODIFIED_AT=1527424681
              NAME=jenkins
              OWNER=TILLER
              STATUS=DEPLOYED
              VERSION=1
Annotations:  <none>

Data
====
release:
----
[ENCRYPTED RELEASE INFO]
Events:  <none>
```

I replaced the content of the release Data with `[ENCRYPTED RELEASE INFO]` since it is too big to be presented in the book. The release contains all the info Tiller used to create the first `jenkins` release. It is encrypted as a security precaution.

We're finished exploring our Jenkins installation, so our next step is to remove it.

```bash
helm delete jenkins
```

The output shows that the `release "jenkins"` was `deleted`.
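As a side note tied to the ConfigMap-based state we just looked at, the labels visible in the `describe` output (for example `OWNER=TILLER`) can be used to list only the ConfigMaps Tiller manages. This is just an optional way to peek at the releases Tiller tracks.

```bash
# Lists only the ConfigMaps owned by Tiller (one per release revision).
kubectl -n kube-system get cm -l OWNER=TILLER
```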
Since this is the first time we deleted a Helm Chart, we might just as well confirm that all the resources were indeed removed.

```bash
kubectl -n jenkins get all
```

The output is as follows.

```
NAME           READY STATUS      RESTARTS AGE
po/jenkins-... 0/1   Terminating 0        5m
```

Everything is gone except the Pod that is still `Terminating`. Soon it will disappear as well, and there will be no trace of Jenkins anywhere in the cluster. At least, that's what we're hoping for. Let's check the status of the `jenkins` Chart.

```bash
helm status jenkins
```

The relevant parts of the output are as follows.

```
LAST DEPLOYED: Thu May 24 11:46:38 2018
NAMESPACE: jenkins
STATUS: DELETED

...
```

If you expected an empty output, or an error stating that `jenkins` does not exist, you were wrong. The Chart is still in the system, only this time its status is `DELETED`. You'll notice that all the resources are gone, though. When we execute `helm delete [THE_NAME_OF_A_CHART]`, we remove only the Kubernetes resources. The Chart itself is still in the system. We could, for example, revert the `delete` action and return to the previous state, with Jenkins up-and-running again. If you want to delete not only the Kubernetes resources created by the Chart but also the Chart itself, please add the `--purge` argument.

```bash
helm delete jenkins --purge
```

The output is still the same as before. It states that the `release "jenkins"` was `deleted`. Let's check the status now, after we purged the system.

```bash
helm status jenkins
```

The output is as follows.

```
Error: getting deployed release "jenkins": release: "jenkins" not found
```

This time, everything was removed, and `helm` cannot find the `jenkins` Chart anymore.

### Customizing Helm Installations

We'll almost never install a Chart the way we just did. Even though the default values often make a lot of sense, there is always something we need to tweak to make an application behave as we expect. What if we do not want the Jenkins tag predefined in the Chart? What if, for some reason, we want to deploy Jenkins `2.112-alpine`? There must be a sensible way to change the tag of the `stable/jenkins` Chart.

Helm allows us to modify an installation through variables. All we need to do is find out which variables are available. Besides visiting the project's documentation, we can retrieve the available values through the command that follows.

```bash
helm inspect values stable/jenkins
```

The output, limited to the relevant parts, is as follows.

```
...
Master:
  Name: jenkins-master
  Image: "jenkins/jenkins"
  ImageTag: "lts"
  ...
```

We can see that within the `Master` section there is a variable `ImageTag`. The name of the variable should be, in this case, sufficiently self-explanatory.
If we need more information, we can always inspect the Chart.

```bash
helm inspect stable/jenkins
```

I encourage you to read the whole output at some later moment. For now, we care only about `ImageTag`. The output, limited to the relevant parts, is as follows.

```
...
| Parameter         | Description      | Default |
| ----------------- | ---------------- | ------- |
...
| `Master.ImageTag` | Master image tag | `lts`   |
...
```

That did not provide much more info. Still, we do not really need more than that. We can assume that `Master.ImageTag` will allow us to replace the default value `lts` with `2.112-alpine`.

If we go through the documentation, we'll discover that one of the ways to overwrite the default values is through the `--set` argument. Let's give it a try.

```bash
helm install stable/jenkins \
    --name jenkins \
    --namespace jenkins \
    --set Master.ImageTag=2.112-alpine
```

The output of the `helm install` command is almost the same as when we executed it the first time, so there's probably no need to go through it again. Instead, we'll wait until `jenkins` rolls out.

```bash
kubectl -n jenkins \
    rollout status deployment jenkins
```

Now that the Deployment rolled out, we are almost ready to test whether the change of the variable had any effect. First, we need to get the Jenkins address. We'll retrieve it in the same way as before, so there's no need for a lengthy explanation.

```bash
ADDR=$(kubectl -n jenkins \
    get svc jenkins \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"):8080
```

As a precaution, please output the `ADDR` variable and check whether the address looks correct.

```bash
echo $ADDR
```

Now we can open the Jenkins UI.

```bash
open "http://$ADDR"
```

This time there is no need even to log in. All we need to do is check whether changing the tag worked. Please observe the version in the bottom-right corner of the screen. It should be *Jenkins ver. 2.112*.

Let's imagine that some time passed and we decided to upgrade our Jenkins from *2.112* to *2.116*. We go through the documentation and discover that there is the `upgrade` command we can leverage.

```bash
helm upgrade jenkins stable/jenkins \
    --set Master.ImageTag=2.116-alpine \
    --reuse-values
```

This time we did not specify the Namespace, but we did set the `--reuse-values` argument. With it, the upgrade maintains all the values used the last time we installed or upgraded the Chart. The result is an upgrade of the Kubernetes resources so that they comply with our desire to change the tag, while leaving everything else intact.
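As an aside, for future upgrades where you'd like to preview the changes before applying them, Helm can render the release without touching the cluster. The flags below are standard Helm 2 options; this is only an optional detour, not a step in the book's flow.

```bash
# Renders the manifests the upgrade would apply, without installing anything.
helm upgrade jenkins stable/jenkins \
    --set Master.ImageTag=2.116-alpine \
    --reuse-values \
    --dry-run --debug
```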
The output of the `upgrade` command, limited to the first few lines, is as follows.

```
Release "jenkins" has been upgraded. Happy Helming!
LAST DEPLOYED: Thu May 24 12:51:03 2018
NAMESPACE: jenkins
STATUS: DEPLOYED
...
```

We can see that the release was upgraded. To be on the safe side, we'll describe the `jenkins` Deployment and confirm that the image is indeed `2.116-alpine`.

```bash
kubectl -n jenkins \
    describe deployment jenkins
```

The output, limited to the relevant parts, is as follows.

```
Name:      jenkins
Namespace: jenkins
...
Pod Template:
  ...
  Containers:
   jenkins:
    Image: jenkins/jenkins:2.116-alpine
    ...
```

The image was indeed updated to the tag `2.116-alpine`. To satisfy my paranoid nature, we'll also open the Jenkins UI and confirm the version there. But, before we do that, we need to wait until the update rolls out.

```bash
kubectl -n jenkins \
    rollout status deployment jenkins
```

Now we can open the Jenkins UI.

```bash
open "http://$ADDR"
```

Please note the version in the bottom-right corner of the screen. It should say *Jenkins ver. 2.116*.

### Rolling Back Helm Revisions

No matter how we deploy our applications and no matter how much we trust our validations, the truth is that sooner or later we'll have to roll back. That is especially true with third-party applications. While we could roll forward faulty applications we developed ourselves, that is often not an option with those that are not under our control. If there is a problem and we cannot fix it fast, the only alternative is to roll back.

Fortunately, Helm provides a mechanism to roll back. Before we try it out, let's take a look at the list of the Charts we installed so far.

```bash
helm list
```

The output is as follows.

```
NAME    REVISION UPDATED     STATUS   CHART          NAMESPACE
jenkins 2        Thu May ... DEPLOYED jenkins-0.16.1 jenkins
```

As expected, we have only one Chart running in our cluster. The critical piece of information is that it is the second revision. First, we installed the Chart with Jenkins version 2.112, and then we upgraded it to 2.116.

We could roll back to the previous version (`2.112`) by executing `helm rollback jenkins 1`. That would roll back from revision `2` to whatever was defined as revision `1`. However, in most cases that is impractical. Most of our rollbacks are likely to be executed through our CD or CDP processes. In those cases, it might be too complicated for us to find out what the previous release number was. Luckily, there is an undocumented feature that allows us to roll back to the previous version without explicitly setting the revision number. By the time you read this, the feature might have become documented. I was about to start working on it and submit a pull request. Luckily, while going through the code, I saw that it's already there. Please execute the command that follows.
```bash
helm rollback jenkins 0
```

By specifying `0` as the revision number, Helm rolls back to the previous version. It's as easy as that.

We got visual confirmation in the form of the "`Rollback was a success! Happy Helming!`" message. Let's take a look at the current situation.

```bash
helm list
```

The output is as follows.

```
NAME    REVISION UPDATED     STATUS   CHART          NAMESPACE
jenkins 3        Thu May ... DEPLOYED jenkins-0.16.1 jenkins
```

We can see that even though we issued a rollback, Helm created a new revision `3`. There's no need to panic. Every change is a new revision, even when that change means re-applying the definition from one of the previous releases.

To be on the safe side, we'll go back to the Jenkins UI and confirm that we are using version `2.112` again.

```bash
kubectl -n jenkins \
    rollout status deployment jenkins

open "http://$ADDR"
```

We waited until Jenkins rolled out, and opened it in our favorite browser. If we look at the version information located in the bottom-right corner of the screen, we are bound to discover that it is *Jenkins ver. 2.112* once again.

We are about to start over one more time, so our next step is to purge Jenkins.

```bash
helm delete jenkins --purge
```

### Using YAML Values To Customize Helm Installations

We managed to customize Jenkins by setting `ImageTag`. What if we'd like to set CPU and memory as well? We should also add Ingress, and that would require a few annotations. If we add Ingress, we might want to change the Service type to ClusterIP and set the HostName to our domain. We should also make sure that RBAC is used. Finally, the plugins that come with the Chart are probably not all the plugins we need. Applying all those changes through `--set` arguments would end up as a very long command and would constitute an undocumented installation. We'll have to change tactics and switch to `--values`. But before we do all that, we need to generate a domain we'll use with our cluster.

We'll use [nip.io](http://nip.io) to generate valid domains. The service provides a wildcard DNS for any IP address. It extracts the IP from the nip.io subdomain and sends it back in the response. For example, if we generate 192.168.99.100.nip.io, it'll be resolved to 192.168.99.100. We can even add sub-sub-domains like something.192.168.99.100.nip.io, and it would still resolve to 192.168.99.100. It's a simple and awesome service that quickly became an indispensable part of my toolbox.

The service will be handy with Ingress, since it allows us to generate a separate domain for each application, instead of resorting to paths which, as you will see, are unsupported by many Charts. If our cluster is accessible through *192.168.99.100*, we can have *jenkins.192.168.99.100.nip.io* and *go-demo-3.192.168.99.100.nip.io*.

We could use [xip.io](http://xip.io) instead. For the end-users, there is no significant difference between the two. The main reason we'll use nip.io instead of xip.io is its integration with some of the tools. Minishift, for example, comes with Routes pre-configured to use nip.io.
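If you'd like to confirm that the wildcard resolution really works, you can query any nip.io name directly. The IP below is only an example; any subdomain that embeds an IP resolves back to that IP.

```bash
# Should answer with 192.168.99.100, the IP embedded in the name.
dig +short jenkins.192.168.99.100.nip.io
```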
First things first… We need to find out the IP of our cluster, or of the external LB if one is available. The commands that follow differ from one cluster type to another.

If your cluster is running in **AWS** and was created with **kops**, we'll need to retrieve the hostname from the Ingress Service and extract the IP from it. Please execute the commands that follow.

```bash
LB_HOST=$(kubectl -n kube-ingress \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")

LB_IP="$(dig +short $LB_HOST \
    | tail -n 1)"
```

If your cluster is running in **AWS** and was created as **EKS**, we'll need to retrieve the hostname from the Ingress Service and extract the IP from it. Please execute the commands that follow.

```bash
LB_HOST=$(kubectl -n ingress-nginx \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].hostname}")

LB_IP="$(dig +short $LB_HOST \
    | tail -n 1)"
```

If your cluster is running in **Docker For Mac/Windows**, the IP is `127.0.0.1`, and all you have to do is assign it to the environment variable `LB_IP`. Please execute the command that follows.

```bash
LB_IP="127.0.0.1"
```

If your cluster is running in **minikube**, the IP can be retrieved using the `minikube ip` command. Please execute the command that follows.

```bash
LB_IP="$(minikube ip)"
```

If your cluster is running in **GKE**, the IP can be retrieved from the Ingress Service. Please execute the command that follows.

```bash
LB_IP=$(kubectl -n ingress-nginx \
    get svc ingress-nginx \
    -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
```

Next, we'll output the retrieved IP to confirm that the commands worked, and generate the sub-domain `jenkins`.

```bash
echo $LB_IP

HOST="jenkins.$LB_IP.nip.io"

echo $HOST
```

The output of the second `echo` command should be similar to the one that follows.

```
jenkins.192.168.99.100.nip.io
```

*nip.io* will resolve that address to `192.168.99.100`, and we'll have a unique domain for our Jenkins installation. That way we can stop using different paths to distinguish applications in the Ingress config. Domains work much better. Many Helm Charts do not even have the option to configure unique request paths, and assume that Ingress will be configured with a unique domain.

Now that we have a valid `jenkins.*` domain, we can try to figure out how to apply all the changes we discussed. We already learned that we can inspect all the available values using the `helm inspect` command. Let's take another look.

```bash
helm inspect values stable/jenkins
```

The output, limited to the relevant parts, is as follows.
```yaml
Master:
  Name: jenkins-master
  Image: "jenkins/jenkins"
  ImageTag: "lts"
  ...
  Cpu: "200m"
  Memory: "256Mi"
  ...
  ServiceType: LoadBalancer
  # Master Service annotations
  ServiceAnnotations: {}
  ...
  # HostName: jenkins.cluster.local
  ...
  InstallPlugins:
    - kubernetes:1.1
    - workflow-aggregator:2.5
    - workflow-job:2.15
    - credentials-binding:1.13
    - git:3.6.4
  ...
  Ingress:
    ApiVersion: extensions/v1beta1
    Annotations:
    ...
...
rbac:
  install: false
  ...
```

Everything we need to accomplish our new requirements is available through those values. Some of them are already filled with defaults, while others are commented out. When we look at all those values, it becomes clear that it would be impractical to try to re-define them all through `--set` arguments. We'll use `--values` instead. It will allow us to specify the values in a file.

I already prepared a YAML file with the values that fulfill our requirements, so let's take a quick look at them.

```bash
cat helm/jenkins-values.yml
```

The output is as follows.

```yaml
Master:
  ImageTag: "2.116-alpine"
  Cpu: "500m"
  Memory: "500Mi"
  ServiceType: ClusterIP
  ServiceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  InstallPlugins:
    - blueocean:1.5.0
    - credentials:2.1.16
    - ec2:1.39
    - git:3.8.0
    - git-client:2.7.1
    - github:1.29.0
    - kubernetes:1.5.2
    - pipeline-utility-steps:2.0.2
    - script-security:1.43
    - slack:2.3
    - thinBackup:1.9
    - workflow-aggregator:2.5
  Ingress:
    Annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/proxy-body-size: 50m
      nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
      ingress.kubernetes.io/ssl-redirect: "false"
      ingress.kubernetes.io/proxy-body-size: 50m
      ingress.kubernetes.io/proxy-request-buffering: "off"
  HostName: jenkins.acme.com
rbac:
  install: true
```

As you can see, the variables in that file follow the same format as those we output through the `helm inspect values` command. The differences are in the values themselves, and in the fact that `helm/jenkins-values.yml` contains only those we are planning to change.

We defined that the `ImageTag` should be fixed to `2.116-alpine`. We specified that our Jenkins master needs half a CPU and 500 MB RAM. The default values of 0.2 CPU and 256 MB RAM are probably not enough. What we set is also low, but since we're not going to run any serious load (at least not yet), what we re-defined should do.

The Service was changed to `ClusterIP` to better accommodate the Ingress resource we're defining further down. If you are not using AWS, you can ignore the `ServiceAnnotations`. They tell the ELB to use the HTTP protocol.

Further down, we are defining the plugins we'll use throughout the book. Their usefulness will become evident in the next chapters.
The values in the `Ingress` section define the annotations that tell Ingress not to redirect HTTP requests to HTTPS (we don't have SSL certificates), as well as a few other less important options. We set both the old style (`ingress.kubernetes.io`) and the new style (`nginx.ingress.kubernetes.io`) of defining NGINX Ingress annotations. That way it'll work no matter which Ingress version you're using. The `HostName` is set to a value that apparently does not exist. I could not know in advance what your hostname would be, so we'll overwrite it later on.

Finally, we set `rbac.install` to `true` so that the Chart knows it should set up the proper permissions.

Having all those variables defined at once might be a bit overwhelming. You might want to go through the [Jenkins Chart documentation](https://hub.kubeapps.com/charts/stable/jenkins) for more info. In some cases, the documentation alone is not enough, and I often end up going through the files that form the Chart. You'll get a grip on them with time. For now, the important thing to observe is that we can re-define any number of variables through a YAML file.

Let's install the Chart with those variables.

```bash
helm install stable/jenkins \
    --name jenkins \
    --namespace jenkins \
    --values helm/jenkins-values.yml \
    --set Master.HostName=$HOST
```

We used the `--values` argument to pass the contents of `helm/jenkins-values.yml`. Since we had to overwrite the `HostName`, we used `--set`. If the same value is defined through both `--values` and `--set`, the latter always takes precedence.

Next, we'll wait for the `jenkins` Deployment to roll out, and then open its UI in a browser.

```bash
kubectl -n jenkins \
    rollout status deployment jenkins

open "http://$HOST"
```

The fact that we opened Jenkins through a domain defined as Ingress (or a Route in case of OpenShift) tells us that the values were indeed used. We can double-check those currently defined for the installed Chart with the command that follows.

```bash
helm get values jenkins
```

The output is as follows.
```yaml
Master:
  Cpu: 500m
  HostName: jenkins.18.220.212.56.nip.io
  ImageTag: 2.116-alpine
  Ingress:
    Annotations:
      ingress.kubernetes.io/proxy-body-size: 50m
      ingress.kubernetes.io/proxy-request-buffering: "off"
      ingress.kubernetes.io/ssl-redirect: "false"
      nginx.ingress.kubernetes.io/proxy-body-size: 50m
      nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
  InstallPlugins:
    - blueocean:1.5.0
    - credentials:2.1.16
    - ec2:1.39
    - git:3.8.0
    - git-client:2.7.1
    - github:1.29.0
    - kubernetes:1.5.2
    - pipeline-utility-steps:2.0.2
    - script-security:1.43
    - slack:2.3
    - thinBackup:1.9
    - workflow-aggregator:2.5
  Memory: 500Mi
  ServiceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
  ServiceType: ClusterIP
rbac:
  install: true
```

Even though the order is slightly different, we can easily confirm that the values are the same as those we defined in `helm/jenkins-values.yml`. The exception is the `HostName`, which was overwritten through the `--set` argument.

Now that we explored how to use Helm to deploy publicly available Charts, we'll turn our attention towards development. Can we leverage the power behind Charts for our applications? Before we proceed, please delete the Chart we installed as well as the `jenkins` Namespace.

```bash
helm delete jenkins --purge

kubectl delete ns jenkins
```

### Creating Helm Charts

Our next goal is to create a Chart for the *go-demo-3* application. We'll use the fork you created in the previous chapter. First, we'll move into the fork's directory.

```bash
cd ../go-demo-3
```

To be on the safe side, we'll push the changes you might have made in the previous chapter, and then we'll sync your fork with the upstream repository. That way we'll guarantee that you have all the changes I might have made. You probably already know how to push your changes and how to sync with the upstream repository. In case you don't, the commands are as follows.

```bash
git add .

git commit -m \
    "Defining Continuous Deployment chapter"

git push

git remote add upstream \
    https://github.com/vfarcic/go-demo-3.git

git fetch upstream

git checkout master

git merge upstream/master
```

We pushed the changes we made in the previous chapter, we fetched the upstream repository *vfarcic/go-demo-3*, and we merged the latest code from it. Now we are ready to create our first Chart.

Even though we could create a Chart from scratch by creating a specific folder structure and the required files, we'll take a shortcut and create a sample Chart that can be modified later to suit our needs. We won't start with a Chart for the *go-demo-3* application. Instead, we'll create a creatively named Chart *my-app* that we'll use to get a basic understanding of the commands we can use to create and manage our Charts. Once we're familiar with the process, we'll switch to *go-demo-3*. Here we go.
```bash
helm create my-app

ls -1 my-app
```

The first command created a Chart named *my-app*, and the second listed the files and the directories that form the new Chart. The output of the latter command is as follows.

```
Chart.yaml
charts
templates
values.yaml
```

We will not go into the details behind each of those files and directories just yet. For now, just note that a Chart consists of files and directories that follow certain naming conventions.

If our Chart has dependencies, we could download them with the `dependency update` command.

```bash
helm dependency update my-app
```

The output shows that `no requirements` were `found in .../go-demo-3/my-app/charts`. That makes sense because we did not yet declare any dependencies. For now, just remember that they can be downloaded or updated.

Once we're done with defining the Chart of an application, we can package it.

```bash
helm package my-app
```

We can see from the output that Helm `successfully packaged chart and saved it to: .../go-demo-3/my-app-0.1.0.tgz`. We do not yet have a repository for our Charts. We'll work on that in the next chapter.

If we are unsure whether we made a mistake in our Chart, we can validate it by executing the `lint` command.

```bash
helm lint my-app
```

The output is as follows.

```
==> Linting my-app
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
```

We can see that our Chart contains no failures, at least not those based on syntax. That should come as no surprise since we did not even modify the sample Chart Helm created for us.

Charts can be installed using a Chart repository (e.g., `stable/jenkins`), a local Chart archive (e.g., `my-app-0.1.0.tgz`), an unpacked Chart directory (e.g., `my-app`), or a full URL (e.g., `https://acme.com/charts/my-app-0.1.0.tgz`). So far we used a Chart repository to install Jenkins. We'll switch to the local archive option to install `my-app`.

```bash
helm install ./my-app-0.1.0.tgz \
    --name my-app
```

The output is as follows.
```
NAME:   my-app
LAST DEPLOYED: Thu May 24 13:43:17 2018
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
my-app  ClusterIP  100.65.227.236  <none>       80/TCP   1s

==> v1beta2/Deployment
NAME    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
my-app  1        1        1           0          1s

==> v1/Pod(related)
NAME                     READY  STATUS             RESTARTS  AGE
my-app-7f4d66bf86-dns28  0/1    ContainerCreating  0         1s


NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app=my-app,release=my-app" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
```

The sample application is a straightforward one with a Service and a Deployment. There's not much to say about it. We used it only to explore the basic commands for creating and managing Charts. We'll delete everything we did and start over with a more serious example.

```bash
helm delete my-app --purge

rm -rf my-app

rm -rf my-app-0.1.0.tgz
```

We deleted the Chart from the cluster, as well as the local directory and the archive we created earlier. The time has come to apply the knowledge we obtained and explore the format of the files that constitute a Chart. We'll switch to the *go-demo-3* application next.

### Exploring Files That Constitute A Chart

I prepared a Chart that defines the *go-demo-3* application. We'll use it to get familiar with writing Charts. Even if we choose to use Helm only for third-party applications, familiarity with Chart files is a must since we might have to look at them to better understand the application we want to install.

The files are located in the `helm/go-demo-3` directory inside the repository. Let's take a look at what we have.

```bash
ls -1 helm/go-demo-3
```

The output is as follows.

```
Chart.yaml
LICENSE
README.md
templates
values.yaml
```

A Chart is organized as a collection of files inside a directory. The directory name is the name of the Chart (without versioning information). So, a Chart that describes *go-demo-3* is stored in the directory with the same name.

The first file we'll explore is *Chart.yaml*. It is a mandatory file with a combination of compulsory and optional fields. Let's take a closer look.

```bash
cat helm/go-demo-3/Chart.yaml
```

The output is as follows.
```yaml
name: go-demo-3
version: 0.0.1
apiVersion: v1
description: A silly demo based on API written in Go and MongoDB
keywords:
- api
- backend
- go
- database
- mongodb
home: http://www.devopstoolkitseries.com/
sources:
- https://github.com/vfarcic/go-demo-3
maintainers:
- name: Viktor Farcic
  email: viktor@farcic.com
```

The `name`, `version`, and `apiVersion` fields are mandatory. All the others are optional. Even though most of the fields should be self-explanatory, we'll go through each of them just in case.

The `name` is the name of the Chart, and the `version` is the version. That's obvious, isn't it? The critical thing to note is that versions must follow the [SemVer 2](http://semver.org/) standard. The full identification of a Chart package in a repository is always a combination of a name and a version. If we package this Chart, its name would be *go-demo-3-0.0.1.tgz*. The `apiVersion` is the version of the Helm API and, at this moment, the only supported value is `v1`.

The rest of the fields are mostly informational. You should be able to understand their meaning, so I won't bother you with lengthy explanations.

The next in line is the LICENSE file.

```bash
cat helm/go-demo-3/LICENSE
```

The first few lines of the output are as follows.

```
The MIT License (MIT)

Copyright (c) 2018 Viktor Farcic

Permission is hereby granted, free ...
```

The *go-demo-3* application is licensed as MIT. It's up to you to decide which license you'll use, if any.

README.md is used to describe the application.

```bash
cat helm/go-demo-3/README.md
```

The output is as follows.

```
This is just a silly demo.
```

I was too lazy to write a proper description. You shouldn't be. As a rule of thumb, README.md should contain a description of the application, a list of the pre-requisites and the requirements, a description of the options available through values.yaml, and anything else you might deem important. As the extension suggests, it should be written in Markdown format.

Now we are getting to the critical part. The values that can be used to customize the installation are defined in `values.yaml`.

```bash
cat helm/go-demo-3/values.yaml
```

The output is as follows.

```yaml
replicaCount: 3
dbReplicaCount: 3
image:
  tag: latest
  dbTag: 3.3
ingress:
  enabled: true
  host: acme.com
service:
  # Change to NodePort if ingress.enable=false
  type: ClusterIP
rbac:
  enabled: true
resources:
  limits:
    cpu: 0.2
    memory: 20Mi
  requests:
    cpu: 0.1
    memory: 10Mi
dbResources:
  limits:
    memory: "200Mi"
    cpu: 0.2
  requests:
    memory: "100Mi"
    cpu: 0.1
dbPersistence:
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ## (gp2 on AWS, standard on GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 2Gi
```

As you can see, all the things that may vary from one *go-demo-3* installation to another are defined here. We can set how many replicas should be deployed for both the API and the DB. Tags of both can be changed as well. We can disable Ingress and change the host. We can change the type of the Service or disable RBAC. The resources are split into two groups, so that the API and the DB can be controlled separately. Finally, we can change the database persistence by specifying the `storageClass`, the `accessMode`, or the `size`.

I should have described those values in more detail in `README.md`, but, as I already admitted, I was too lazy to do that. The alternative explanation for the lack of a proper README is that we'll go through the YAML files where those values are used, and everything will become much more apparent. The important thing to note is that the values defined in that file are defaults that are used only if we do not overwrite them during the installation through the `--set` or `--values` arguments.

The files that define all the resources are in the `templates` directory.

```bash
ls -1 helm/go-demo-3/templates/
```

The output is as follows.

```
NOTES.txt
_helpers.tpl
deployment.yaml
ing.yaml
rbac.yaml
sts.yaml
svc.yaml
```

The templates are written in the [Go template language](https://golang.org/pkg/text/template/) extended with add-on functions from the [Sprig library](https://github.com/Masterminds/sprig) and a few others specific to Helm. Don't worry if you are new to Go. You will not need to learn it. A few templating rules are more than enough for most use-cases. With time, you might decide to "go crazy" and learn everything templating offers. That time is not today.

When Helm renders the Chart, it'll pass all the files in the `templates` directory through its templating engine.

Let's take a look at the `NOTES.txt` file.

```bash
cat helm/go-demo-3/templates/NOTES.txt
```

The output is as follows.

```
1. Wait until the applicaiton is rolled out:
  kubectl -n {{ .Release.Namespace }} rollout status deployment {{ template "helm.fullname" . }}

2. Test the application by running these commands:
{{- if .Values.ingress.enabled }}
  curl http://{{ .Values.ingress.host }}/demo/hello
{{- else if contains "NodePort" .Values.service.type }}
  export PORT=$(kubectl -n {{ .Release.Namespace }} get svc {{ template "helm.fullname" . }} -o jsonpath="{.spec.ports[0].nodePort}")

  # If you are running Docker for Mac/Windows
  export ADDR=localhost

  # If you are running minikube
  export ADDR=$(minikube ip)

  # If you are running anything else
  export ADDR=$(kubectl -n {{ .Release.Namespace }} get nodes -o jsonpath="{.items[0].status.addresses[0].address}")

  curl http://$NODE_IP:$PORT/demo/hello
{{- else }}
  If the application is running in OpenShift, please create a Route to enable access.

  For everyone else, you set ingress.enabled=false and service.type is not set to NodePort.
  The application cannot be accessed from outside the cluster.
{{- end }}
```

The content of the NOTES.txt file will be printed after the installation or upgrade. You already saw a similar one in action when we installed Jenkins. The instructions we received on how to open it and how to retrieve the password came from the NOTES.txt file stored in the Jenkins Chart.

That file is our first direct contact with Helm templating. You'll notice that parts of it are inside `if/else` blocks. If we take a look at the second bullet, we can deduce that one set of instructions will be printed if `ingress` is `enabled`, another if the `type` of the Service is `NodePort`, and yet another if neither of the first two conditions is met.

Template snippets are always inside double curly braces (e.g., `{{` and `}}`). Inside them can be (often simple) logic like an `if` statement, as well as predefined and custom-made functions. An example of a custom-made function is `{{ template "helm.fullname" . }}`. It is defined in the `_helpers.tpl` file which we'll explore soon.

Variables always start with a dot (`.`). Those coming from the `values.yaml` file are always prefixed with `.Values`. An example is `.Values.ingress.host` that defines the `host` that will be configured in our Ingress resource.

Helm also provides a set of pre-defined variables prefixed with `.Release`, `.Chart`, `.Files`, and `.Capabilities`. As an example, near the top of the NOTES.txt file is the `{{ .Release.Namespace }}` snippet that will get converted to the Namespace into which we decided to install our Chart.

The full list of the pre-defined values is as follows (a copy of the official documentation).

* `Release.Name`: The name of the release (not the Chart).
* `Release.Time`: The time the chart release was last updated. This will match the Last Released time on a Release object.
* `Release.Namespace`: The Namespace the Chart was released to.
* `Release.Service`: The service that conducted the release. Usually this is Tiller.
* `Release.IsUpgrade`: This is set to `true` if the current operation is an upgrade or rollback.
* `Release.IsInstall`: This is set to `true` if the current operation is an install.
* `Release.Revision`: The revision number. It begins at 1, and increments with each helm upgrade.
* `Chart`: The contents of the Chart.yaml. Thus, the Chart version is obtainable as Chart.Version and the maintainers are in Chart.Maintainers.
* `Files`: A map-like object containing all non-special files in the Chart. This will not give you access to templates, but will give you access to additional files that are present (unless they are excluded using .helmignore). Files can be accessed using `{{ index .Files "file.name" }}` or using the `{{ .Files.Get name }}` or `{{ .Files.GetString name }}` functions. You can also access the contents of the file as `[]byte` using `{{ .Files.GetBytes }}`.
* `Capabilities`: A map-like object that contains information about the version of Kubernetes (`{{ .Capabilities.KubeVersion }}`), the version of Tiller (`{{ .Capabilities.TillerVersion }}`), and the supported Kubernetes API versions (`{{ .Capabilities.APIVersions.Has "batch/v1" }}`).

You'll also notice that our `if`, `else if`, `else`, and `end` statements start with a dash (`-`). That's the Go template way of specifying that we want all empty space before the statement (when `-` is on the left) or after the statement (when `-` is on the right) to be removed.

There's much more to Go templating than what we just explored. I'll comment on other use-cases as they come.
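To make those rules a bit more tangible, here is a minimal, hypothetical fragment (it is not part of the *go-demo-3* Chart) that combines a trimmed `if` block, a value from `values.yaml`, and a pre-defined variable.

```
{{- if .Values.ingress.enabled }}
host: {{ .Values.ingress.host }}
namespace: {{ .Release.Namespace }}
{{- end }}
```

Assuming the Chart is installed into the `go-demo-3` Namespace with the default values, that fragment should render to `host: acme.com` and `namespace: go-demo-3`. If we installed it with `--set ingress.enabled=false`, it should render to nothing at all, and the dashes would make sure that no stray empty lines are left behind.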
For now, this should be enough to get you going. You are free to consult the template package documentation for more info. The critical thing to note is that we have the `NOTES.txt` file that will provide useful post-installation information to those who will use our Chart.

I mentioned `_helpers.tpl` as the source of custom functions and variables. Let's take a look at it.

```bash
cat helm/go-demo-3/templates/_helpers.tpl
```

The output is as follows.

```
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "helm.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "helm.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
```

That file is the exact copy of the `_helpers.tpl` file that was created with the `helm create` command that generated a sample Chart. You can extend it with your own functions. I didn't. I kept it as-is. It consists of two functions with comments that describe them. The first (`helm.name`) returns the name of the chart trimmed to 63 characters, which is the limit for the size of some of the Kubernetes fields. The second function (`helm.fullname`) returns the fully qualified name of the application. If you go back to the NOTES.txt file, you'll notice that we are using `helm.fullname` on a few occasions. Later on, you'll see that we'll use it in quite a few other places.

Now that NOTES.txt and _helpers.tpl are out of the way, we can take a look at the first template that defines one of the Kubernetes resources.

```bash
cat helm/go-demo-3/templates/deployment.yaml
```

The output is as follows.
```yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: {{ template "helm.fullname" . }}
  labels:
    app: {{ template "helm.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "helm.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ template "helm.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - name: api
        image: "vfarcic/go-demo-3:{{ .Values.image.tag }}"
        env:
        - name: DB
          value: {{ template "helm.fullname" . }}-db
        readinessProbe:
          httpGet:
            path: /demo/hello
            port: 8080
          periodSeconds: 1
        livenessProbe:
          httpGet:
            path: /demo/hello
            port: 8080
        resources:
{{ toYaml .Values.resources | indent 10 }}
```

That file defines the Deployment of the *go-demo-3* API. The first thing I did was to copy the definition from the YAML file we used in the previous chapters. Afterwards, I replaced parts of it with functions and variables. The `name`, for example, is now `{{ template "helm.fullname" . }}`, which guarantees that this Deployment will have a unique name.

The rest of the file follows the same logic. Some things are using pre-defined values like `{{ .Chart.Name }}` and `{{ .Release.Name }}`, while others are using those from `values.yaml`. An example of the latter is `{{ .Values.replicaCount }}`.

The last line contains a syntax we haven't seen before. `{{ toYaml .Values.resources | indent 10 }}` will take all the entries from the `resources` field in `values.yaml`, and convert them to YAML format. Since the final YAML needs to be correctly indented, we piped the output to `indent 10`. Since the `resources:` section of `deployment.yaml` is indented by eight spaces, indenting the entries from `resources` in `values.yaml` by ten will put them just two spaces inside it.

Let's take a look at one more template.

```bash
cat helm/go-demo-3/templates/ing.yaml
```

The output is as follows.

```yaml
{{- if .Values.ingress.enabled -}}
{{- $serviceName := include "helm.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "helm.fullname" . }}
  labels:
    app: {{ template "helm.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: {{ $serviceName }}
          servicePort: 8080
    host: {{ .Values.ingress.host }}
{{- end -}}
```

That YAML defines the Ingress resource that makes the API Deployment accessible through its Service. Most of the values are the same as in the Deployment.
There's only one difference worth commenting on. The whole YAML is enveloped in the `{{- if .Values.ingress.enabled -}}` statement. The resource will be installed only if the `ingress.enabled` value is set to `true`. Since that is already the default value in `values.yaml`, we'll have to explicitly disable it if we do not want Ingress.

Feel free to explore the rest of the templates. They are following the same logic as the two we just described.

There's one potentially significant file we did not define. We have not created `requirements.yaml` for *go-demo-3*. We did not need any dependencies. We will use it though in one of the next chapters, so I'll save the explanation for later.

Now that we went through the files that constitute the *go-demo-3* Chart, we should `lint` it to confirm that the format does not contain any apparent issues.

```bash
helm lint helm/go-demo-3
```

The output is as follows.

```
==> Linting helm/go-demo-3
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
```

If we ignore the complaint that the icon is not defined, our Chart seems to be defined correctly, and we can create a package.

```bash
helm package helm/go-demo-3 -d helm
```

The output is as follows.

```
Successfully packaged chart and saved it to: helm/go-demo-3-0.0.1.tgz
```

The `-d` argument is new. It specifies that we want to create the package in the `helm` directory. We will not use the package just yet. For now, I wanted to make sure that you remember that we can create it.

### Upgrading Charts

We are about to install the *go-demo-3* Chart. You should already be familiar with the commands, so you can consider this an exercise that aims to solidify what you already learned. There will be one difference compared to the commands we executed earlier. It'll prove to be a simple, and yet important one for our continuous deployment processes.

We'll start by inspecting the values.

```bash
helm inspect values helm/go-demo-3
```

The output is as follows.

```yaml
replicaCount: 3
dbReplicaCount: 3
image:
  tag: latest
  dbTag: 3.3
ingress:
  enabled: true
  host: acme.com
route:
  enabled: true
service:
  # Change to NodePort if ingress.enable=false
  type: ClusterIP
rbac:
  enabled: true
resources:
  limits:
    cpu: 0.2
    memory: 20Mi
  requests:
    cpu: 0.1
    memory: 10Mi
dbResources:
  limits:
    memory: "200Mi"
    cpu: 0.2
  requests:
    memory: "100Mi"
    cpu: 0.1
dbPersistence:
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 2Gi
```

We are almost ready to install the application. The only thing we're missing is the host we'll use for it.

You'll find two commands below. Please execute only one of them, depending on your Kubernetes flavor.

If you are **NOT** using **minishift**, please execute the command that follows.
```bash
HOST="go-demo-3.$LB_IP.nip.io"
```

If you are using minishift, you can retrieve the host with the command that follows.

```bash
HOST="go-demo-3-go-demo-3.$(minishift ip).nip.io"
```

No matter how you retrieved the host, we'll output it so that we can confirm that it looks OK.

```bash
echo $HOST
```
In my case, the output is as follows.

```
go-demo-3.192.168.99.100.nip.io
```

Now we are finally ready to install the Chart. However, we won't use `helm install` as before. We'll use `upgrade` instead.

```bash
helm upgrade -i \
    go-demo-3 helm/go-demo-3 \
    --namespace go-demo-3 \
    --set image.tag=1.0 \
    --set ingress.host=$HOST \
    --reuse-values
```

The reason we are using `helm upgrade` this time lies in the fact that we are practicing the commands we hope to use inside our CDP processes, and we want the same process no matter whether it's the first release (install) or one of those that follow (upgrade). It would be silly to have `if/else` statements that would determine whether it is the first release and thus execute an install, or to go with an upgrade. We are going with a much simpler solution. We will always upgrade the Chart. The trick is in the `-i` argument that can be translated to "install unless a release by the same name already exists."

The next two arguments are the name of the release (`go-demo-3`) and the path to the Chart (`helm/go-demo-3`). By using the path to the directory with the Chart, we are experiencing yet another way to supply the Chart files. In the next chapter, we'll switch to using `tgz` packages.

The rest of the arguments are making sure that the correct tag is used (`1.0`), that Ingress is using the desired host, and that the values that might have been used in the previous upgrades are still the same (`--reuse-values`).

If this command is used in the continuous deployment processes, we would need to set the tag explicitly through the `--set` argument to ensure that the correct image is used. The host, on the other hand, is static and unlikely to change often (if ever). We would be better off defining it in `values.yaml`. However, since I could not predict what your host will be, we had to define it through the `--set` argument.

Please note that minishift does not support Ingress (at least not by default). So, it was created, but it has no effect. I thought that this is a better option than to use different commands for OpenShift than for the rest of the flavors. If minishift is your choice, feel free to add `--set ingress.enabled=false` to the previous command.

The output of the `upgrade` is the same as if we executed `install` (resources are removed for brevity).
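If you'd like to double-check the result before moving on, the commands that follow are one way to do it. They are not part of the book's flow, just a quick sanity check, and they assume that the release and the Namespace are both named `go-demo-3` (which, given the `helm.fullname` logic we explored earlier, also makes `go-demo-3` the name of the Deployment).

```bash
# Confirm that Tiller considers the release deployed
helm ls go-demo-3

# Show the resources that belong to the release, together with the NOTES.txt output
helm status go-demo-3

# Wait until the Deployment rolls out before testing the application
kubectl -n go-demo-3 \
    rollout status deployment go-demo-3
```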