Edenmal

Sysadmin Garden of Eden

KubeCon 2017 Austin

Table of Contents
  1. Thanks!
  2. Day #0
  3. Day #1
    3.1. Rook Team + Booth
    3.2. Morning Keynotes
      3.2.1. Keynote: A Community of Builders: CloudNativeCon Opening Keynote - Dan Kohn, Executive Director, Cloud Native Computing Foundation
      3.2.2. Keynote: CNCF Project Updates - Michelle Noorali, Senior Software Engineer, Microsoft Azure
      3.2.3. Keynote: Accelerating the Digital Transformation - Imad Sousou, VP, Software Services Group & GM, OpenSource Technology Center, Intel Corporation
      3.2.4. Keynote: Cloud Native CD: Spinnaker and the Culture Behind the Tech - Dianne Marsh, Director of Engineering, Netflix
      3.2.5. Keynote: Cloud Native at AWS - Adrian Cockcroft, Vice President Cloud Architecture Strategy, Amazon Web Services
    3.3. Talk: The True Costs of Running Cloud Native Infrastructure [B] - Dmytro Dyachuk, Pax Automa
    3.4. Talk: CRI-O: All the Runtime Kubernetes Needs, and Nothing More - Mrunal Patel, Red Hat
    3.5. Talk: Building a Secure, Multi-Protocol and Multi-Tenant Cluster for Internet-Facing Services [A] - Bich Le, Platform9
    3.6. Evening Keynotes
      3.6.1. Keynote: Making Sense of the Service Mesh - Ben Sigelman, LightStep
      3.6.2. Keynote: Kubernetes: This Job is Too Hard: Building New Tools, Patterns and Paradigms to Democratize Distributed System Development - Brendan Burns, Distinguished Engineer, Microsoft
      3.6.3. Keynote: Can 100 Million Developers Use Kubernetes? - Alexis Richardson, CEO, Weaveworks
    3.7. Community Awards
    3.8. Party
  4. Day #2
    4.1. Morning Keynotes
      4.1.1. Keynote: KubeCon Opening Keynote - Project Update - Kelsey Hightower, Staff Developer Advocate, Google
      4.1.2. Keynote: Kubernetes Secret Superpower - Chen Goldberg & Anthony Yeh, Google
      4.1.3. Keynote: Red Hat: Making Containers Boring (again) - Clayton Coleman, Architect, Kubernetes and OpenShift, Red Hat
      4.1.4. Keynote: Pushing the Limits of Kubernetes with Game of Thrones - Zihao Yu & Illya Chekrygin, HBO
      4.1.5. Keynote: Progress Toward Zero Trust Kubernetes Networks - Spike Curtis, Senior Software Engineer, Tigera
      4.1.6. Keynote: The Road Ahead on the Kubernetes Journey - Craig McLuckie, CEO, Heptio
    4.2. Talk: Extending Kubernetes 101 [A] - Travis Nielsen, Quantum Corp
    4.3. Talk: Building Helm Charts From the Ground Up: An Introduction to Kubernetes [I] - Amy Chen, Heptio
    4.4. Food: Tacos by RedHat
    4.5. Talk: Vault and Secret Management in Kubernetes [I] - Armon Dadgar, HashiCorp
  5. Day #3
    5.1. Keynotes Morning
      5.1.1. Keynote: Opening Remarks + Keynote: Kubernetes Community - Sarah Novotny, Head of Open Source Strategy, Google Cloud Platform, Google
      5.1.2. Keynote: Kubernetes at GitHub - Jesse Newland, Principal Site Reliability Engineer, GitHub
      5.1.3. Keynote: Manage the App on Kubernetes - Brandon Philips, CTO, CoreOS
      5.1.4. Keynote: What’s Next? Getting Excited about Kubernetes in 2018 - Clayton Coleman, Architect, Kubernetes and OpenShift, Red Hat
      5.1.5. Keynote: What is Kubernetes? - Brian Grant, Principal Engineer, Google
    5.2. Talk: Disaster Recovery for your Kubernetes Clusters [I] - Andy Goldstein & Steve Kriss, Heptio
    5.3. Talk: One Chart to Rule Them All: Continuous Deployment with Helm at Ticketmaster - Michael Goodness & Raphael Deem, Ticketmaster
    5.4. Talk: Economics of using Local Storage Attached to VMs on Cloud Providers [I] - Pavel Snagovsky, Quantum
    5.5. “Find A Job | Post A Job | Get Involved” Wall
  6. Summary

NOTE This post is work in progress! There are still unfinished parts to be found. This note will be removed when this post has been finished.
NOTE All credit for the slides in the pictures goes to their creators!

Thanks!

A huge thanks goes out to Quantum Corp for being so kind to sponsor my trip! Check out their Twitter accounts @QuantumCorp and @rook_io.


Day #0

NOTE If you don’t want to read about my flight and stay, skip to Day #1.
TL;DR First flight in a “long” time and alone, nervous as hell.

KubeCon: Flight - Boarding the plane to KubeCon

“Started from STR (Stuttgart) now we’re in the sky.”
Totally not a "rip-off" of a song lyric by Drake

KubeCon: Flight - Look out of airplane window

I had a pit stop in Atlanta and then flew on to Austin. The flight to Austin was a bit rough because it started raining just as the plane departed.
After a total of about 12 hours, I finally arrived at my hotel in Austin. Pretty exhausted, but ready for KubeCon!
I'll say it here already: I wasn't really jet-lagged, but I heard the flight back will definitely do it. Well, I will enjoy my days off after KubeCon to "cure" the jet lag and get ready for work again.


Day #1

TODO

INFO The quality of the photos in this post varies quite a bit, as they were all taken with a smartphone camera.

I was pretty lucky to be there early, as I got my badge instantly. Others who arrived later weren't as lucky and had to wait quite a while.

Rook Team + Booth

NOTE If you don’t want to know about the rook team + community photos and the booth, skip to Day #1 - Morning Keynotes.
INFO These images were mostly taken on Friday, the third day of KubeCon.

I was welcomed at the rook booth with rook swag from the rook team. I also got the awesome green rook birthday shirt (check it out on Amazon); part of the rook team and I are wearing it in the photos below.
Me with rook swag

We also took photos of the team and community users who were there.

KubeCon: Rook Team + Community photo #1
KubeCon: Rook Team + Community photo #2

And the booth in complete beauty looked like this:
KubeCon: Rook booth Shot #1
Please note that the image was actually a photo sphere, which is why it looks a bit "bugged".

KubeCon: Rook booth Shot #2
The rook booth, logo and stickers look awesome; shout-out to their art designer.

Morning Keynotes

KubeCon: Keynote Day #1 Shot #1

Keynote: A Community of Builders: CloudNativeCon Opening Keynote - Dan Kohn, Executive Director, Cloud Native Computing Foundation

There are over 4,000 people at KubeCon. That is more than the last three events together. Even in the face of terror attacks, we as a community "continue developing and building" our awesome projects.
In addition to all the awesome tracks, there was the first Serverless track at KubeCon.
"Kubernetes is the Linux of the cloud": it is open source infrastructure that works anywhere.
It is amazing to see the statistics of Kubernetes and other related projects at devstats.k8s.io.
Containers have pushed the boundaries, allowing for quick delivery of applications.
Today there are hundreds of meetups around the world about Kubernetes and cloud native in general. There are free edX courses by the CNCF/Kubernetes available. The Certified Kubernetes Administrator program also already has over 600 people registered.

Keynote: CNCF Project Updates - Michelle Noorali, Senior Software Engineer, Microsoft Azure

A goal of the CNCF now is to reach commercial sustainability for its projects.
The CNCF member count is skyrocketing and currently stands at over 150 members. This helps the community through funds to support projects.
Stats about the CNCF:

The next KubeCon will be held in Copenhagen in May 2018. Additionally, there will be a KubeCon in Shanghai, China in November 2018 to account for the increasing usage of CNCF projects in the Asian market.

Kubernetes was the first project donated to the CNCF. It is awesome to see how cloud native Kubernetes itself is: it allows scaling, cross-cloud deployments and much more.
The Container Storage Interface (CSI for short) allows simple storage integration without vendor lock-in for containers in general, and it will soon be coming to Kubernetes as well.
CoreDNS v1.0 has been released; in Kubernetes v1.9.x it will be available as a replacement for kube-dns (a rough sketch of its config follows below).
containerd (originally built by Docker) has released version 1.0.0; there is also a (Kubernetes) Container Runtime Interface (CRI for short) implementation available for it.
CoreOS' rkt is security-minded and also CRI compatible. CoreOS "created" (and is currently maintaining) the Container Network Interface (CNI for short). CNI makes the network layer pluggable, and not only for containers.
If you didn't know yet, "everyone" uses CNI (Kubernetes, Docker, rkt, CRI-O, etc.).
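Since CoreDNS will be able to stand in for kube-dns, here is a rough sketch of what its ConfigMap in a cluster might look like. This is a minimal example from memory, not from the keynote; the cluster domain and upstream resolver are illustrative, so check the CoreDNS/Kubernetes docs for your version.

```yaml
# Minimal sketch of a CoreDNS ConfigMap used as a kube-dns replacement.
# Cluster domain and upstream resolver are illustrative assumptions.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # serve cluster.local records from the Kubernetes API
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        # forward everything else to the node's resolver
        proxy . /etc/resolv.conf
        cache 30
    }
```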

Observability:

Fluentd logging flow

Reliability:

Security:

Keynote: Accelerating the Digital Transformation - Imad Sousou, VP, Software Services Group & GM, OpenSource Technology Center, Intel Corporation

There is a new container runtime by Intel called Kata Containers ("Kata" means "Trust" in Greek). It focuses especially on the "isolation" of containers. Containers "normally" have less isolation because a) more applications run on one host/node and b) containers are usually only separated by kernel cgroups. It has always kind of been "speed OR security" (containers vs. VMs).
Using Intel® Clear Containers and Hyper's runV allows for hardware-accelerated containers. Kata Containers is compliant with the CRI specification.
It also bodes well for the Kata Containers project that many contributors come not only from Intel but also from other companies ("independent" contributors). If you are interested, get involved in the project!

There was also a mention of Alibaba Cloud and their new X-Dragon bare-metal cloud servers, which can serve up to 4.5 million pps. Alibaba Cloud is working on integrating its cloud platform with Kubernetes.

WIP The content from here on below is still work in progress! Keep that in mind when reading it.

Keynote: Cloud Native CD: Spinnaker and the Culture Behind the Tech - Dianne Marsh, Director of Engineering, Netflix

KubeCon: Keynote Netflix Shot #1

Netflix: culture impacts tools/tech. "Freedom & Responsibility" for employees. "Share information openly, broadly, and deliberately".
Pressure from the Spinnaker and Edge Center teams. Every team solved their "own" problems quickly, but they didn't keep up with each other.
"Courage and Selflessness": an engineer said that he wasn't able to keep up with the other project.
In 2015 Spinnaker was released as open source. Today it supports multiple cloud providers, thanks to its pluggable, multi-cloud-provider architecture from the beginning.
Spinnaker is a continuous integration (and deployment) tool. Only active-active applications.
“You better understand your service than anybody else”
“Guardrails, not Gates” <- THIS, “Engineers decide when to deploy”, “Am I ready to deploy without a manual step?”

KubeCon: Keynote Netflix Shot #2

Criticality to the business for deployments; "Increasing interest in delegation(bility)"
"Think carefully about the tools you choose, are you fighting the culture?"

Keynote: Cloud Native at AWS - Adrian Cockcroft, Vice President Cloud Architecture Strategy, Amazon Web Services

AWS is working to integrate cloud native components into ECS.

AWS: “Grow Communities, Improve Code, Increase Contributions”
Kubernetes scalability testing will be moved to AWS.
The AWS network is integrated into CNI, already upstream. Just use CNI and you "get" AWS networking.
Bare metal is now available on AWS, too.
Fargate takes away "everything", kind of like SaaS but you provide the application; for when you don't want to run the instances beneath your application.
AWS EKS (AWS's own GKE): "An Open Source Kubernetes Experience", native AWS integration, IAM authentication with Heptio; also Kube2IAM. Currently partly work in progress.

Talk: The True Costs of Running Cloud Native Infrastructure [B] - Dmytro Dyachuk, Pax Automa

I missed the first part of the talk.
The part I saw was pretty interesting.
Compute is the biggest cost, storage second, and people/staff third.
Storage comes second because of IOPS: you need hardware good enough to deliver the IOPS you need and want. This shouldn't be overlooked during planning.

The aggregated value of how often workloads scale up and down is reportedly not as high as anticipated.
Thanks to the dynamic nature of cloud deployments, resources are not an issue, as they can be added on the fly, which is not as easy with 1:1 (colo) deployments.
The amount of reserved resources tends to increase over time; this potentially also applies to the limits of deployed applications.
This helps make sure that there are enough resources available for both existing and new demand.
Over-provisioned resources can be used for batch jobs, for example.

KubeCon Talk: Colo, AWS side by side

Money can be saved this way, but the right analytical values need to be taken into account to do it.

KubeCon Talk: Colo Provisioning workload diagram

At some point the colo wins over a cloud provider (AWS was used as the example in the talk), as the colo costs "stay" at a constant level.

KubeCon Talk: Colo vs AWS cost Cross over point

It depends on how the workload changes and what the application(s) need in terms of dynamic scaling. For example, a CPU-heavy workload may be better run on AWS, as CPU is cheaper to scale up and down quickly on demand.
In other cases, storage is cheaper in a colo due to the cost of the storage servers.
Additionally, a colo only stays cheap(er) at a certain scale of total workload.

My takeaway: plan for at least one cloud provider and colo/bare metal.

Talk: CRI-O: All the Runtime Kubernetes Needs, and Nothing More - Mrunal Patel, Red Hat

CRI-O utilizes runC to run containers. CRI-O seems to provide an "init" for the container process that takes care of reaping "detached" processes.
The integration into Kubernetes also seems to be simple: it passes over 300 tests for PRs in Kubernetes without any changes to Kubernetes in "favor" of CRI-O.
It normally uses CNI to configure the network of a container.

Looks like a good alternative to Docker (and other runtimes) when running Kubernetes.

The next step for CRI-O will be to release version 1.9. It will track and support each Kubernetes version.
The blog is available at: CRI-O Medium.
OpenShift plans to use CRI-O.
It is fully feature-compatible with running Kubernetes on Docker. The log format is different from Docker's, so everything that consumes the logs directly needs to use the CRI-O format (like the Fluentd Kubernetes cluster add-on).
CRI-O is tied to Kubernetes (and should be tested with Kubernetes too). When running containers for something other than Kubernetes, one should evaluate the available runtimes to see which one fits the workload.

Switching to CRI-O in 100% Kubernetes environments seems worth doing; depending on how you update Kubernetes, you may not have any additional steps (maybe even fewer).
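As a rough sketch, this is roughly what pointing the kubelet at CRI-O involves. The flag names are from memory for that Kubernetes generation, and the socket path is an assumption that differs between CRI-O versions and packagings, so check the CRI-O docs:

```
# kubelet flags (sketch) to use CRI-O instead of the Docker engine
--container-runtime=remote
--container-runtime-endpoint=unix:///var/run/crio/crio.sock   # socket path is an assumption
--runtime-request-timeout=10m
```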

Talk: Building a Secure, Multi-Protocol and Multi-Tenant Cluster for Internet-Facing Services [A] - Bich Le, Platform9

They have a project named Decco that manages Namespaces and Network Policies for them.
Decco by Platform9

KubeCon Talk: Decco Traffic Management Slide

KubeCon Talk: Traffic Flow TCP Slide

KubeCon Talk: Traffic Flow HTTP Slide

Looks pretty interesting; possibly a good fit for companies looking to regulate namespace creation and security.
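To give an idea of the kind of NetworkPolicy a tool like Decco would manage per tenant namespace, here is a minimal hand-written sketch (not from the talk, the names are made up) that only allows traffic from Pods inside the same namespace:

```yaml
# Hypothetical per-tenant policy: ingress is only allowed from Pods
# in the same namespace; traffic from other namespaces is dropped.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: tenant-a          # example tenant namespace
spec:
  podSelector: {}              # applies to all Pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}      # any Pod from this same namespace
```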

Evening Keynotes

I haven't been to many talks/presentations yet due to interesting chats and discussions with the rook team and other nice people at KubeCon.

Keynote: Making Sense of the Service Mesh - Ben Sigelman, LightStep

"A service mesh can help", because we write the same things over and over again once one big service is split up into many microservices.
To reduce the "ahh, which microservice is broken?" moments during on-call, a distributed tracing tool is a good "weapon".
OpenTracing can help to keep the differences between tracing formats across multiple languages small.

The big part of distributed tracing is having a trace from the first service all the way through the following services, with all the information required to debug any issue (be it performance or errors/exceptions).
When the services are more complex, this can be a big hassle (services that queue requests, etc.).

DaaS (Donuts as a Service): “Scaling fast and running into performance issues”.

Jaeger allows you to view the traces in a detailed fashion, once you have the data.
Analyzing traces at a larger business level gets harder the more concurrent requests you have overall.
Currently there are sidecars available for some protocols/applications already (nginx, Conduit).

The examples using donuts were pretty nice.

Keynote: Kubernetes: This Job is Too Hard: Building New Tools, Patterns and Paradigms to Democratize Distributed System Development - Brendan Burns, Distinguished Engineer, Microsoft

Enable people to build distributed applications.
There are a lot of tools/steps before deployment. Before you can think about your application, you have to go through the "whole" process of deciding what to use.
A problem during development is that some values, for example port 80, are specified in the application, the Dockerfile and the Kubernetes manifest, which makes it error-prone when a change to one of these files is forgotten.

There should be one "everything in one place" that contains such information. The second principle is "Build libraries, not platforms". The third is "Encourage sharing and re-use".
I personally think that the second and third are especially connected with each other: sharing a library means you don't have to build a platform.

Language idiomatic cloud
Takeaways:

When looking into which “item” you can map to the following:

There is nothing “universal” for a “Standard Lib.”.
This is where Metaparticle comes in: it allows an application to, for example, know how to build, push and deploy itself, with things like sharding in mind.
Additionally, it is a bridge to close the knowledge-transfer gap for building distributed applications, so that code for "simple" distributed application tasks (locking, etc.) isn't written over and over again.

Keynote: Can 100 Million Developers Use Kubernetes? - Alexis Richardson, CEO, Weaveworks

For the "bot" that colorizes black-and-white images, the OpenFaaS (Functions as a Service) project, Weave and GKE were used.
When something (an application) can be "merged", it reduces the time needed to redo a lot of work/code.
“What are we building for?” -> The future of today’s kids.

Community Awards

Congratulations to all the winners of the awards!
Thank you for your contributions to the community!

Party

There was a marching band playing after the Keynotes in the sponsor showcase:

KubeCon: Party marching band


Day #2

Morning Keynotes

KubeCon: Keynote Day #2 Shot #1

Keynote: KubeCon Opening Keynote - Project Update - Kelsey Hightower, Staff Developer Advocate, Google

KubeCon: Keynote Kelsey Hightower at it again

“Kubernetes changelog is boring” because Kubernetes is now on a level that you can build stuff on top of it.
Voice controlled Kubernetes cluster creation on Google Cloud Platform.
If you need to create a ticket to deploy, you should have separate clusters.
"No one cares": devs don't want kubectl on their systems.
What is wanted: when a code change is done, the dev gets a URL to where it is running (even better, metrics for it too) (GitLab style?).
Config is "implementation details".
"Be ashamed if you build your releases on a laptop."
A full workflow should be simply available for applications.

“Visibility for users, ‘removes’ the need for users to get access/tools (to kubectl)”
“Don’t rebuild the image between QA and Live”

Emergencies should allow bypassing the pipelines.

Keynote: Kubernetes Secret Superpower - Chen Goldberg & Anthony Yeh, Google

There is a large ecosystem around Kubernetes.

KubeCon: Keynote Kubernetes' system layers

Kubernetes is ready for us to extend on/build on.

Automation: reconciliation (loops)
Kubernetes is built for chaos, which anything cloud native should be able to handle.

It is simple to build an extension for the Kubernetes API server.

KubeCon: Keynote Kubernetes extension from flexible to easier

KubeCon: Keynote rook mentioned on a slide

YES!! rook on a slide during the Keynotes.

A project to extend Kubernetes on a meta level: GoogleCloudPlatform/kube-metacontroller on GitHub.
It is a CRD controller and is not part of the Kubernetes lifecycle.

KubeCon: Keynote kube-metacontroller demo

The demo was about re-implementing StatefulSets as CatSets with a “custom” Spec.
With this project, new features are more likely to come from users rather than from Kubernetes directly; users can now just implement them as they want/need.

Extensibility is about empowering. Google and IBM built Istio using CRDs to make it easy to use.

Keynote: Red Hat: Making Containers Boring (again) - Clayton Coleman, Architect, Kubernetes and OpenShift, Red Hat

Infrastructure allows us to turn the impossible into ordinary.
Lots of applications send lots of metrics. Fixes for issues with the metrics and with running Kubernetes overall have been submitted.

KubeCon: Keynote Making Kubernetes boring

etcd3 helped fix issues with scaling. -66% memory use.

"Boring" means a certain maturity of a project, not its death.

Keynote: Pushing the Limits of Kubernetes with Game of Thrones - Zihao Yu & Illya Chekrygin, HBO

They talked about their experience with, and move to, Kubernetes.

Even a cloud like AWS has limits in the end.

One of the reasons why they use Kubernetes is that the “Batteries are included”.
They use Terraform to create the Kubernetes clusters. There is a fixed number of masters; a failed one gets replaced. Nodes are scaling up and down all the time.
There were still some issues in the beginning, like the Prometheus pod being rescheduled on scale-down.
They register their minions (nodes) with taints depending on whether they are dynamically scaled instances or "default"/reserved ones (see the sketch after this paragraph).
Additionally, they have backup minions in case an instance type is no longer available.
type: ClusterIP + Ingress have problems keeping up with burst traffic. type: LoadBalancer is limited by the AWS API rate limit.
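Going back to the taints mentioned above, here is a rough sketch of how such a taint-based split could look. This is purely illustrative and not HBO's actual configuration:

```yaml
# Hypothetical: taint autoscaled nodes so only workloads that tolerate
# them get scheduled there; reserved nodes stay untainted.
#   kubectl taint nodes <node-name> lifecycle=autoscaled:NoSchedule
#
# A Deployment that is allowed onto those nodes then carries a toleration:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-web
  template:
    metadata:
      labels:
        app: example-web
    spec:
      tolerations:
        - key: lifecycle
          operator: Equal
          value: autoscaled
          effect: NoSchedule
      containers:
        - name: web
          image: nginx:1.13
```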

KubeCon: Keynote Networking Services

Flannel has problems scaling beyond a certain point, it seems.

KubeCon: Keynote Flannel scaling problems

There were DNS issues with ndots:5, since it causes a lot of "bad" (unnecessary) lookups (see image).

KubeCon: Keynote kube-dns ndots issue

kube-dns tuning: increase the --cache size, and use --address and --server=/homeboxoffice.com/10.1.2.3#53.

KubeCon: Keynote kube-dns tuning
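For reference, flags like these end up on the dnsmasq container of the kube-dns Deployment. Here is a rough sketch based on the flags shown on the slide; the image tag and values are illustrative, not HBO's exact settings:

```yaml
# Excerpt of a kube-dns Deployment: the dnsmasq(-nanny) container with
# a larger cache and an extra forwarding rule for an internal domain.
- name: dnsmasq
  image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7   # illustrative tag
  args:
    - -v=2
    - -logtostderr
    - -restartDnsmasq=true
    - --
    - -k
    - --cache-size=10000                              # increased cache
    - --no-negcache
    - --server=/cluster.local/127.0.0.1#10053         # cluster domain to kube-dns
    - --server=/homeboxoffice.com/10.1.2.3#53         # internal domain to its own resolver
```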

Using Prometheus + cAdvisor is not so good with 300 nodes @40 core each + 20k containers.

KubeCon: Keynote Kubernetes Telemetry

“Were We Ready?”:

  • Load test
  • Load test more!
  • Load test the SHIT Out of it!!!

HBO’s advice:

KubeCon: Keynote HBO's advice

Keynote: Progress Toward Zero Trust Kubernetes Networks - Spike Curtis, Senior Software Engineer, Tigera

Most clouds use the Calico project for applying their network security policies.
Attacks have become more sophisticated.

KubeCon: Keynote evolution of secure application connectivity

That is why network segmentation has become even more of a thing.
Istio handles it transparently to the application.
It was announced that Calico supports the Kubernetes network security policy as well as the Istio service mesh.
Through the serviceAccount, Kubernetes makes it possible to "enforce" an authentication token for every Pod.

By reducing the trust in the network, security is increased. This is also called the zero trust network model.
github.com/projectcalico/app-policy
Tigera announced Tigera CNX for businesses.

Keynote: The Road Ahead on the Kubernetes Journey - Craig McLuckie, CEO, Heptio

KubeCon: Keynote Road Ahead on the Kubernetes Journey Shot #1

“The community gotta figure it (the future) out”

KubeCon: Keynote Road Ahead overview

If a dev needs to wait days to get a VM to develop on, productivity hits rock bottom.
In the cloud everything is kinda “XaaS” (“X as a Service”).
Disaster recovery is very important in a world of microservices, where a lot of services depend on each other.

KubeCon: Keynote Seperation of concerns

Cloud providers, good for us, offer upstream-conformant Kubernetes as a Service. It works the same everywhere.
That is what a tool like heptio/sonobuoy is for: packaging and running tests across multiple clouds.
Multi-cloud is simply the ability to move between clouds and get what you need, where you want it.
Heptio is working with Microsoft to bring Ark disaster recovery and portability to Microsoft Azure.
Ark seems worth a look for disaster recovery of Kubernetes (and of applications in Kubernetes?).

"Enterprise is.. complicated", because there is no single solution for things like ingress, logging, monitoring, CI/CD, security and stateful services that everyone uses right now.
Additionally, the politics and "rules" in each company are always different; for a platform like Kubernetes, at least, formalizing extensibility is a good thing to do.

Talk: Extending Kubernetes 101 [A] - Travis Nielsen, Quantum Corp

Kubernetes resources are declarative. You want a namespace, you get a namespace.
When you want to declare your own resources, Custom Resource Definitions (CRDs) are the way to go. Note that CRDs use the same pattern as the built-in resources.

You declare it, and the "owner" of the CRD in the Kubernetes cluster makes it happen.
The traditional approach is to create a REST API "outside" of Kubernetes. This isn't a good way to run something "Kubernetes-native".
CRD objects can be edited just as easily as every other object in Kubernetes.
Using Kubernetes objects directly is way better for devs and ops, as it streamlines everything into the "Kubernetes way" (manifests).

When developing CRDs, you should design the CRD properties first. Additionally, in Kubernetes 1.8+ there is code generation available to make the resources easier for clients to use/consume.

The flow of an operator looks like this:

  1. Design the CRD properties.
  2. Register the CRD at the API server.
  3. Run a watch loop on your CRD.

An example CRD:

KubeCon: Talk Extend Kubernetes example CRD

It can have any kind of structure, as long as it can be expressed in a YAML file.
The code is available on GitHub at rook/operator-kit (sample-operator).
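As a rough, hand-written sketch (names made up, not the exact example from the slide), such a definition plus an instance of it could look like this:

```yaml
# Hypothetical CustomResourceDefinition in the style of the sample-operator.
# apiextensions.k8s.io/v1beta1 was the CRD API in the Kubernetes 1.8/1.9 era.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: samples.myproject.example.com
spec:
  group: myproject.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    kind: Sample
    singular: sample
    plural: samples
---
# An instance of the new resource type; the spec can have any structure
# that can be expressed in YAML.
apiVersion: myproject.example.com/v1alpha1
kind: Sample
metadata:
  name: my-sample
  namespace: default
spec:
  hello: world
  replicas: 1
```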

KubeCon: Talk Extend Kubernetes example CRD Go code structure

In Go, annotations need to be added so that the CRD types are picked up by the code generation.
Additionally, to be able to list the custom resources, a list structure holding an "array" of the CRD type should also be created.

An operator normally reacts to onAdd, onUpdate and onDelete events during the watch on the CRDs.
From the Kubernetes point of view, an operator should "include" the code generated by the CRD code generation tool.

The operator-kit is useful so that the same CRD-handling code isn't written over and over again.

Rook was named as an example:

KubeCon: Talk Extend Kubernetes Rook.io project

Talk: Building Helm Charts From the Ground Up: An Introduction to Kubernetes [I] - Amy Chen, Heptio

Helm is kinda a “version control system” for Kubernetes manifests.
"Normal" Kubernetes only allows rolling back certain types of objects, like Deployments.
Helm allows rolling back more than that through versioning of manifest "bundles".

It further simplifies installing and upgrading objects in Kubernetes that have been created through Helm.
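A tiny hand-written sketch of the idea (not from the talk, everything here is illustrative): a chart's templates reference values from values.yaml, Helm renders them into plain manifests, and every install/upgrade becomes a revision you can roll back to.

```yaml
# values.yaml of a hypothetical chart
replicaCount: 2
image:
  repository: nginx
  tag: "1.13"
```

```yaml
# templates/deployment.yaml - the {{ ... }} placeholders are filled in
# by Helm from values.yaml and the release metadata at render time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

Rolling back is then just a matter of pointing the release back at an earlier revision (helm rollback <release> <revision>).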

Food: Tacos by RedHat

This is my mandatory food shot.
Thanks for the tacos, Red Hat!

KubeCon: Food Tacos from RedHat

Talk: Vault and Secret Management in Kubernetes [I] - Armon Dadgar, HashiCorp

Requests Credentials -> Vault -> Create Credentials in Database
How Dynamic secrets should be handled in Vault:

KubeCon: Talk Vault Management Kubernetes Dynamic secrets

For creating credentials, there are plugins available that are separate from the core.
These plugins are available for multiple applications like MySQL, SSH and more. Creating more and more integrations/plugins for applications is kind of a community effort.
High availability is done with an active/standby model. The standby instances proxy requests to the active instance.

An entity is a representation of a single person, consistent across logins.
An alias is a mapping between an entity and an auth backend.
Identity groups allow a team-sharing-like structure, for example: Engineering -> Team 1 | Team 2.

Vault Kubernetes auth backend for service accounts, see https://www.vaultproject.io/docs/auth/Kubernetes.html

KubeCon: Talk Vault Management Kubernetes Auth Backend

Day #3

TODO

Keynotes Morning

KubeCon: Keynote Day #3 Shot #1

Keynote: Opening Remarks + Keynote: Kubernetes Community - Sarah Novotny, Head of Open Source Strategy, Google Cloud Platform, Google

Snow yesterday.
Large event, is it still the community I love?
Who has the power? Who to ask? Steering Committee!

"Culture eats strategy for breakfast." - Peter Drucker
From 400 people two years ago to a huge increase this year.
“Distribution > Centralization”

KubeCon: Keynote Opening Remarks #1

“Inclusive > Exclusive”

KubeCon: Keynote Opening Remarks #2

It’s a core piece.

“Community > Company”

KubeCon: Keynote Opening Remarks #3

A culture of continuous learning.

“Improvement > Stagnation”

KubeCon: Keynote Opening Remarks #4

bit.ly/k8smentoring

"Automation > Process"
"Manual labor" tasks; the chop wood, carry water award.

KubeCon: Keynote Opening Remarks #5

Love Kubernetes to shape the industry.

KubeCon: Keynote Love Kubernetes, shape the industry

Keynote: Kubernetes at GitHub - Jesse Newland, Principal Site Reliability Engineer, GitHub

KubeCon: Keynote Day #3 Shot #2

GitHub runs on Kubernetes in production.
At the time (4 years ago), senior people at GitHub said to wait. "We're not in the right spot to do the execution."
They really weren't in the right place and time to do that back then.

KubeCon: Keynote GitHub Love Kubernetes

They already have 20% of their services running on Kubernetes.
GitHub.com and the GitHub API are already powered by Kubernetes (at least the stateless part of it).

KubeCon: Keynote GitHub over 20% services on Kubernetes

They had no real experience operating Kubernetes clusters.
They use multiple clusters in different sites. To deploy to their multiple sites, they simply use their existing deployment tools.
They use small, "blade"-like servers.

KubeCon: Keynote GitHub blade servers

And their clusters look like this:

KubeCon: Keynote GitHub Cluster configuration

They can deploy via chat (ChatOps); Hubot is used. They also use Puppet for deploying to Kubernetes.
They use Consul to "route" the traffic.

KubeCon: Keynote GitHub consul service router

They hope to replace this with the Envoy project.
GLB (the GitHub Load Balancer) is used with NodePort on all nodes.

They use some awesome tools like:

KubeCon: Keynote GitHub Awesome tool

Workflow Conventions

KubeCon: Keynote GitHub Lab Branch Workflow

KubeCon: Keynote GitHub Lab workflow

They get a "lab" URL for every PR; the namespace with the application is deleted after 24 hours.
Canary deployments to a small percentage of traffic.
All services of GitHub now also use the canary platform.

KubeCon: Keynote GitHub Canary deploy

The plan for GitHub with Kubernetes is to move more and more workload to Kubernetes.
They also want to get persistent volumes for their engineers.

KubeCon: Keynote GitHub Kubernetes made it easier for SREs

KubeCon: Keynote GitHub Decomposition of monolith services

Currently most projects aren’t Open Source yet, but they will begin to change that.

“It is the least we could do for the community”

Keynote: Manage the App on Kubernetes - Brandon Philips, CTO, CoreOS

The goal is to make apps easier to build, deploy, update and so on.
There is demand for running more and more apps on Kubernetes, but you also become more and more accountable for those apps.

KubeCon: Keynote CoreOS Pre-Kubernetes State for Applications

Who is the owner? Where are the dashboards? Metrics and SLAs? Docs?
An ecosystem with a catalog of apps, with all that information attached to them.

KubeCon: Keynote CoreOS Future Kubernetes State for Applications

Move the information into metadata of the app (in Kubernetes too).
They showed off the Tectonic console, which already does some of this management for applications like Vault and etcd.
This is partly a vision right now. It is all managed through objects in Kubernetes.

Keynote: What’s Next? Getting Excited about Kubernetes in 2018 - Clayton Coleman, Architect, Kubernetes and OpenShift, Red Hat

How can we make Kubernetes exciting?
We, the people, should help make everything better.

KubeCon: Keynote Surprise, we already are!

But.. if you haven't been engaged, you should go out of your comfort zone and work with people on projects you haven't worked with yet.

  1. Build faster, smarter, better: open source sets the tone. Like Ruby on Rails, which set the tone for creating and sharing code (gems), sharing now enables everyone to build a service mesh.
    • 2018, year of the service mesh: Istio, Envoy and Conduit will help solve this problem/use case. Machine learning is and will be more and more important.
    • Serverless? Fission, Kubeless and Apache OpenWhisk are some example projects that implement serverless.
    • Defining apps: applications are getting more complex over the years. Helm, kompose, kedge and other tools try to make it simpler for complex applications to "exist".
  2. Security and Authentication
    • SPIFFE, Istio and Kerberos for securing workload identity. The container-identity-wg working group is working on improving that.
    • Multi-tenancy, policy through Open Policy Agent (OPA), LDAP, <INSERT POLICY HERE>.
    • Better containers (and VMs!): CRI-O, containerd, OCI, KubeVirt, Hyper-V, debug containers (for debugging in production, for example).
    • github.com/appscode/kubed, github.com/heptio/ark, github.com/cloudnativelabs/kube-router, github.com/GoogleCloudPlatform/kube-metacontroller

How do we participate? Build something. Use anything. Help anyone. Talk to people. Create something new!

(Don’t reinvent everything)

Keynote: What is Kubernetes? - Brian Grant, Principal Engineer, Google

Kubernetes was launched at OSCON. Kubernetes was the most fun for him. Can confirm!

KubeCon: Keynote Is Kubernetes a...

The presentation is available via a link on Twitter.
It gives a pretty good look at the more technical side of Kubernetes.

KubeCon: Keynote Google CaaS

KubeCon: Keynote Google Controllers

KubeCon: Keynote Google SaaS

Again, rook.io was named as an example of API extension (through CRDs).

KubeCon: Keynote Google Kubernetes Extension Examples

“As long as it implements the API, it is/should compatible”
Kubernetes can be used for automating management, Service Catalog APIs.
Kubernetes is kind of a portable cloud abstraction.
Kubernetes shouldn’t do everything you want. You can build tools to do that on Kubernetes, like with CRDs. SDKs are under development to make creating APIs like that simpler and easier.

Conclusion is that Kubernetes is awesome!

KubeCon: Keynote Google Conclusion of Talk

Talk: Disaster Recovery for your Kubernetes Clusters [I] - Andy Goldstein & Steve Kriss, Heptio

The problem is stateful data. Stateful data already begins with Kubernetes' own etcd and extends to the PersistentVolumes used inside Kubernetes.
For etcd, you can back up at the block level, the file level, with etcdctl, or via the Kubernetes API (discovery).

For PersistentVolumes there is nothing really, except for "clouds" that have a snapshot API. There is ongoing work on a snapshot/backup API in Kubernetes in some form.

Heptio Ark allows restoring API objects in Kubernetes. Additionally, it can back up and restore PersistentVolumes.
Using Kubernetes API discovery, it discovers all APIs available in the Kubernetes API server it talks to.
Other features of Heptio Ark are:

KubeCon: Talk Heptio Ark DR Features

The features can be extended through hooks (for example, a script that runs before a backup is taken) and plugins.
There are different types of plugins.

In the demo they used Rook block storage and showed how to backup and restore it.
It is as simple as running a command. When, for example, a namespace with a PersistentVolumeClaim is deleted, Ark allows fully recovering the Kubernetes objects of the PersistentVolume (new IDs are generated, though) as well as the PersistentVolume's data itself.
Ark is fully open source. They have a channel in the Kubernetes Slack: #ark-dr.

Talk: One Chart to Rule Them All: Continuous Deployment with Helm at Ticketmaster - Michael Goodness & Raphael Deem, Ticketmaster

No LOTR memes, even though the title suggests them.
There is no native way to bundle all "dependent" resources together to make creating/updating/deleting them simpler.
Helm is like a package manager for applications on Kubernetes.
When you have a lot of clusters and multiple stages a tool like Helm makes it easier for teams to package their applications.
They also use a tool to create namespaces, which adds some metadata as well as RBAC, resource quotas and network policies to them.

KubeCon: Talk One Chart - Versions

I personally prefer a mix of both approaches: some teams simply want to do certain things their own way, that is just how it is right now, and taking that away wouldn't be good.
Looking at Spring Boot applications, v2 is the better approach, so it also depends on what you want to deploy.
In v2, having a lot of knobs and dials can be considered a good thing, depending on how experienced the team(s) are.

Knobs for things like tracing, logging and other sidecars/features are nice to have in the templates.
But covering everything is never possible, so only the most important functions should be "exposed" as values in the values.yaml.
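As a hand-written sketch (not Ticketmaster's actual chart, all names and values are made up) of what such knobs in a shared chart's values.yaml could look like:

```yaml
# Hypothetical values.yaml of a shared "webservice" chart: teams only
# flip the knobs they need, everything else keeps sane defaults.
image:
  repository: registry.example.com/my-team/my-service
  tag: "1.0.0"
replicaCount: 3

ingress:
  enabled: true
  host: my-service.example.com

tracing:
  enabled: false        # set to true to inject a tracing sidecar
logging:
  sidecar: true         # ship logs through a logging sidecar

resources:
  requests:
    cpu: 100m
    memory: 128Mi
```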

KubeCon: Talk One Chart - Chart Shot #1

KubeCon: Talk One Chart - Chart Shot #2

KubeCon: Talk One Chart - Chart Shot #3

They have a pretty awesome (and complex) looking Helm chart "template" for the so-called "webservice" chart.

Talk: Economics of using Local Storage Attached to VMs on Cloud Providers [I] - Pavel Snagovsky, Quantum

KubeCon: Talk Local Storage Shot #1

Locally attached storage is faster and most of the time pretty inexpensive.

KubeCon: Talk Local Storage Pros & Cons

Using an application like rook to utilize that storage is a good way to make that fast storage usable for Kubernetes across the whole cluster.

KubeCon: Talk Local Storage Harnessing the benefits with rook

Rook solves the problem of the local storage going to waste and instead, as written earlier, makes it usable for Kubernetes within the cluster.
In some cases the performance with Rook is better than using the block storage of your cloud provider of choice.
For AWS:

KubeCon: Talk Local Storage AWS performance comparison

Another advantage, when looking at using Kubernetes in multiple cloud environments, is that rook is simply compatible with Kubernetes and has only a small number of requirements for the system it runs on.
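A rough sketch of how a workload would then consume rook-backed storage through a normal PersistentVolumeClaim; the storage class name is an assumption (something like rook-block was typical at the time), so adjust it to your rook setup:

```yaml
# Hypothetical claim against a rook-backed StorageClass; the class name
# "rook-block" is an assumption and depends on how rook was set up.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-my-database
spec:
  storageClassName: rook-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```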

KubeCon: Talk Local Storage Rook Compatibility

KubeCon: Talk Local Storage AWS Local Storage costs

KubeCon: Talk Local Storage Costs Summary

Use cases for rook are:

KubeCon: Talk Local Storage Conclusion

Rook is awesome! But keep in mind that, as always, it depends on whether it really fits your use case.
Depending on your use case, especially in the cloud but even on bare metal, you could save money.

“Find A Job | Post A Job | Get Involved” Wall

KubeCon: Sponsor Showcase Find A Job | Post A Job | Get Involved Wall

If you need a few more pixels, I have a higher-resolution photo of the wall available; just send me an email or comment on this page. Thanks!

Summary

TODO

Have Fun!