
Deployment - microservices for Kubernetes

By Trentend Tricky @tetricky
    2021-11-08 21:30:56.489Z

    My intention really was to introduce myself, say some things about forum software, and say nice things about ty. There is perhaps no appropriate place to do that, so probably I shouldn't.

    I do think that for a modern, forward-looking, multi-use forum that integrates with other services (which I have reason to believe ty is, and this is among the nice things I might have said), a scalable deployment method - by which I mean k8s - would be both useful and lead to more widespread adoption. That might in turn lead to a business model allowing for more resilience in the development team, which I see as a potential weakness - though I observe that almost a decade of development has produced a superior product that has outlasted many other promising projects and seen off many commercial offerings in the field.

    Some background. I have run a small community online since the late nineties. Ostensibly a specific special-interest community, it has evolved into a wider support and collaboration community, based entirely around a forum (various software, latterly SMF and now elkarte - purely to get timely maintenance updates).

    Originally I ruled out talkyard as our next step, in part due to the complexity of deployment (and possibly maintenance). But having tested every other viable option, alongside integrating other tools to provide the collaboration we require (things like seafile, xmpp, hedgedoc, Hugo - with drone and gitea for gitops static blog updates), the conclusion I reached is that I can solve many problems by deploying talkyard.

    I have considered creating a collaborative environment around xmpp/mov.im and other tools... but having run a community for a long time, I appreciate the easily accessible, multi-timezone static forum. I started back in the old text BBS days. Old habits die hard.

    I am aware of the previous creation of a helm chart... but I am going to try something a bit more root and branch - stripping out things like postgres and redis into their own services, and seeing if I can have multiple replicas scaling on load, working against an HA-backed RWX storage backend (longhorn).
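
    As a rough sketch of what I mean (the names, sizes, and thresholds below are placeholders I have made up, not anything talkyard actually ships):

    ```yaml
    # Hypothetical sketch: a ReadWriteMany volume on longhorn that several app replicas
    # can mount, plus an autoscaler that adds replicas under CPU load. Names are invented.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: talkyard-uploads            # placeholder name
    spec:
      accessModes: ["ReadWriteMany"]    # RWX, so multiple replicas can share it
      storageClassName: longhorn
      resources:
        requests:
          storage: 50Gi
    ---
    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: talkyard-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: talkyard-app              # assumes an app Deployment with this name
      minReplicas: 1
      maxReplicas: 4
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 75
    ```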

    I probably don't have an up-to-date coding background, but I have done some mucking about with k8s from a deployment perspective. I run everything from single-node k3s up to a multi-master HA cluster (three masters, five nodes), with a kilo CNI and >5TB of storage.

    Deploying to this wide range of server targets is really where my interest lies (this will be in spare time, so not quick, if at all).

    Happy to discuss aspects of this or my experience/opinions. Equally happy to just see how I get on. I'm not expecting any assistance.

    Great software. Talkyard gives compelling reasons to stay with the forum format that no other offering has the scope for. It matches the key objectives that I have. Although not best of breed in every aspect, it's substantially ahead of any competitor overall.

    • 6 replies
    1. Hello Tetricky!

      Happy to discuss aspects of this

      Ok :- )   About microservices: Be prepared for Talkyard adding & needing more containers, "suddenly".

      For example, some time "soon" (maybe next year) there'll likely be a Docker container running Deno and compiling CommonMark to HTML. (Maybe you can think of this as a microservice, although the underlying motivations are different.)

      And then it's good for you if somehow the Kubernetes configuration you have can easily (automatically?) be updated to add these new containers.

      Also, sometimes there're security problems, and it'd be good (I think) if the Kubernetes stuff could somehow auto apply security fixes.

      With Docker-Compose, Talkyard will some time next year be able to not only upgrade itself, but also automatically download new images (that weren't present at all before, in any version), as needed. It'll do this by getting a new docker-compose.yml file with new image version tags (or hashes); any new Docker images and containers therein would then automatically get downloaded and started by Docker-Compose.
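
      Something like this fragment, just as an illustration (service and image names made up, not the real compose file): bumping the tags in a freshly downloaded docker-compose.yml, or adding a brand-new service to it, is enough for `docker-compose up -d` to pull and start whatever changed.

      ```yaml
      # Illustrative fragment only, not Talkyard's actual docker-compose.yml.
      version: '3.7'
      services:
        app:
          image: example/talkyard-app:v0.2021.30          # hypothetical tag
        web:
          image: example/talkyard-web:v0.2021.30
        markdown:                                         # a service absent from the old file
          image: example/talkyard-deno-markdown:v0.2021.30
      ```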

      I don't know if such a thing is possible with Kubernetes?

      latterly SMF and now elkarte - purely to get timely maintenance updates

      Hadn't heard about Elkarte. (SMF and Elkarte have automatic software upgrades?)

      k8s - would be both useful and lead to more widespread adoption

      Not sure about more widespread adoption :- ) When reading HackerNews, I get the impression that K8s takes a while to learn, and that for most organizations just installing Docker-Compose, or maybe installing nothing and using systemd instead, is simpler.

      multi-master HA cluster (three masters, five nodes)

      To me that sounds like many. Do you run these in VPSes in AWS or Google Cloud or something like that? Or on bare metal, ... maybe on-premise at a workplace?

      I have run a small community online

      I'm getting curious about what it is about. If you want, you could share a link?

      mov.im, is that https://github.com/movim/movim?

      this will be in spare time, so not quick, if at all

      Ok, maybe that's a bit good — there'll be a migration to Postgres 14, some time next year. And if you're slow enough, you won't need to do the migration :- )

      1. Trentend Tricky @tetricky
          2021-11-10 22:53:33.671Z

          For example, some time "soon" (maybe next year) there'll likely be a Docker container running Deno and compiling CommonMark to HTML. (Maybe you can think of this as a microservice, although the underlying motivations are different.)

          Deployment can happen through a number of methods, but ultimately comes down to a deployment manifest in yaml (which can be packaged with a tool like helm, which provides values - again in yaml - that define the deployment). There is essentially a sliding scale of modularisation.

          1. At the macro end you might have an image packaged with all of the services needed to run an application within it. So this might have a postgresql service, a redis service, etc., providing a full installation. This image is somewhat inflexible, because all of the services required for the application are within one image. You can't independently scale, or easily re-use components elsewhere, because they are contained within one container. There is little gain to be had in this sort of scheme, beyond being able to deploy the application to this sort of environment alongside other workloads - though the separation this provides has some minor advantages (you may have an application that needs alpine, something that runs on ubuntu 20.04, and so on).

          2. You might have, on the other hand, an application image, a separate postgresql image, a redis image, etc., used to create containers, where these can communicate with each other using service ports, or indeed by writing to persistent storage with mount points that can be accessed by the different containers. You might have only one application container, postgresql installed as an HA cluster across multiple servers, and a resilient HA storage backend. When the load on the application requires further replicas to meet demand, it can then auto-scale independently of the other components, and vice versa. I am envisaging that this is the sort of scheme that talkyard may fit into (notwithstanding that my level of understanding of the moving parts and inter-dependencies at this stage is almost non-existent) - see the sketch after this list.

          3. It is possible to write true low-level microservices at the function level. Here you might use a framework like openfaas ( https://www.openfaas.com/ ). With this you can code scalable functions that form the application and services. I am not proposing this level of abstraction; it's not practical or desirable (you would reasonably want to retain your existing deployment options, and it would be idiotic levels of work for every point update). In this regard my use of 'microservices' in the title is wrong. Services might be better, as some might be quite 'big'.
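
          A bare-bones illustration of scheme 2 (all names, ports, and image tags below are placeholders I have invented, not talkyard's actual images): the application is its own deployment and only reaches postgres and redis through cluster services, so each part can scale independently.

          ```yaml
          # Hypothetical sketch of scheme 2. Everything here is a placeholder.
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: talkyard-app
            namespace: talkyard
          spec:
            replicas: 2                               # scaled independently of the database
            selector:
              matchLabels: { app: talkyard-app }
            template:
              metadata:
                labels: { app: talkyard-app }
              spec:
                containers:
                  - name: app
                    image: example/talkyard-app:5.4   # invented image and tag
                    ports:
                      - containerPort: 9000           # invented port
                    env:
                      - name: DATABASE_HOST           # hypothetical variable name
                        value: postgres.talkyard.svc.cluster.local
                      - name: REDIS_HOST              # hypothetical variable name
                        value: redis.talkyard.svc.cluster.local
          ```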

          And then it's good for you if somehow the Kubernetes configuration you have can easily (automatically?) be updated to add these new containers.

          Also, sometimes there're security problems, and it'd be good (I think) if the Kubernetes stuff could somehow auto apply security fixes.

          Depending on how you define the deployment, you can control how image updates are picked up. This requires the images to be built first, pushed to a hub, and correctly tagged. There are then various ways of triggering a new image pull, depending on the deployment manifest and how upgrades are triggered - from fully automatic/periodic to manual. If we consider an image "talkyard", then each build (on a per-release basis) would be tagged. An image can be tagged "latest", or with a release series (e.g. "5"), or a point release ("5.4"). If you build and push a new image (say "5.5"), then a deployment using images tagged "latest" would pull the new image, as would one tagged "5", but a deployment pinned to "5.4" would not. If you then built and pushed an image for a new series ("6.0"), only a deployment using the "latest" tag would upgrade the image.

          So in this way you can manage avoiding upgrades with breaking changes, but still achieve maintenance updates (point upgrades).
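
          Concretely, the pinning is just the image reference in the container spec (made-up image names again):

          ```yaml
          # The same container, pinned at three different levels.
          containers:
            - name: app
              # image: example/talkyard-app:latest   # follows every release, including new series
              # image: example/talkyard-app:5        # follows point releases within the 5.x series
              image: example/talkyard-app:5.4        # fully pinned; changes only when the manifest does
              imagePullPolicy: Always                # re-check the registry, so a re-pushed tag is picked up
          ```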

          When going from one release series to the next, you might have to do things like upgrade the database schema, or some such. In a container environment you should achieve this by pushing a new image - which might include a test for (say) the current schema version, and run an upgrade script if necessary. It may be necessary to scale the application containers down to one replica, then upgrade by pulling the new image (so that multiple containers are not trying to update the same data, which may go badly).

          Where the structure of the required containers changes on a new release, there might be a new deployment manifest (or helm chart) that reflects this new structure, and that tests for and runs an upgrade on the data to match the new required structure.
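
          Purely as a hypothetical example (I don't yet know how talkyard's migrations actually run), the schema check could sit in an init container, combined with scaling to a single replica while it runs:

          ```yaml
          # Hypothetical Deployment excerpt: run a schema check/upgrade before the app container starts.
          # With replicas: 1 during the upgrade, only one pod touches the data at a time.
          spec:
            replicas: 1
            template:
              spec:
                initContainers:
                  - name: migrate
                    image: example/talkyard-app:6.0                        # invented image
                    command: ["/bin/sh", "-c", "./upgrade-if-needed.sh"]   # invented script
                containers:
                  - name: app
                    image: example/talkyard-app:6.0
          ```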

          It is possible to run commands/processes inside a container to change it, but this is bad practice. Unless the changed image is pushed as a new image, and that new image used, the changes are not persistent and may cause problems.

          With Docker-Compose, Talkyard will some time next year be able to not only upgrade itself, but also automatically download new images (that weren't present at all before, in any version), as needed. It'll do this by getting a new docker-compose.yml file with new image version tags (or hashes); any new Docker images and containers therein would then automatically get downloaded and started by Docker-Compose.

          I don't know if such a thing is possible with Kubernetes?

          Yes. A deployment manifest in kubernetes can handle images in the same way as docker-compose. It can also do lots of other things, like create namespaces, define services, ports and ingress (how a deployment is seen from outside the cluster), manage certificates, and handle things like auth access to services. Generally kubernetes is harder to set up as an infrastructure, but it handles HA, scaling of services, migrating workloads, and all the moving and changing of things in much less problematic ways than docker, once you progress past single instances. I still use docker in places (for historical reasons, no new deployments), mostly with docker-compose, and I did some work with docker-swarm for a short while, but my broad observation was that k3s was not much harder than docker-swarm, and ultimately offered much better scale-out options. I am currently looking at moving my docker stuff to podman, which is becoming increasingly competent.
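
          For instance, exposing the application is just more yaml in the same manifest (the hostname, ports, and the cert-manager annotation are all assumptions on my part):

          ```yaml
          # Hypothetical Service + Ingress for an app Deployment like the one sketched above.
          apiVersion: v1
          kind: Service
          metadata:
            name: talkyard-app
            namespace: talkyard
          spec:
            selector: { app: talkyard-app }
            ports:
              - port: 80
                targetPort: 9000            # invented container port
          ---
          apiVersion: networking.k8s.io/v1
          kind: Ingress
          metadata:
            name: talkyard
            namespace: talkyard
            annotations:
              cert-manager.io/cluster-issuer: letsencrypt   # assumes cert-manager is installed
          spec:
            rules:
              - host: forum.example.org
                http:
                  paths:
                    - path: /
                      pathType: Prefix
                      backend:
                        service:
                          name: talkyard-app
                          port:
                            number: 80
            tls:
              - hosts: [forum.example.org]
                secretName: talkyard-tls
          ```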

          Docker (podman) offers some ease of use, but against that kubernetes gives you (for example) things like longhorn ( https://longhorn.io/ ), which provides a replicated cross-cluster HA storage backend with backups to s3 - taking resilience and backup from an application consideration to the infrastructure level - and multi-master clusters, where infrastructure upgrades can be managed without downtime.
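
          To give a flavour (values invented, and the s3 backup target itself is configured in longhorn's own settings rather than here):

          ```yaml
          # Hypothetical longhorn StorageClass: each volume is kept as three replicas spread
          # across nodes, so storage resilience is an infrastructure concern, not an app one.
          apiVersion: storage.k8s.io/v1
          kind: StorageClass
          metadata:
            name: longhorn-replicated
          provisioner: driver.longhorn.io
          parameters:
            numberOfReplicas: "3"
            staleReplicaTimeout: "2880"
          ```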

          Hadn't heard about Elkarte. (SMF and Elkarte have automatic software upgrades?)

          No, they don't. SMF appeared to be dying, and elkarte was a temporary measure to get some maintenance and functional updates that weren't available in a timely fashion through SMF. It's a development of SMF and it was easy to import my existing forum into it. I was considering flarum for a period, when it was long in beta, but I don't much like its stack (php, mysql). Talkyard, functionally and technologically, feels a much more attractive option.

          Not sure about more widespread adoption :- ) When reading HackerNews, I get the impression that K8s takes a while to learn, and that for most organizations just installing Docker-Compose, or maybe installing nothing and using systemd instead, is simpler.

          k8s definitely takes a lot more learning; k3s less so. Much new deployment is to cloud-managed k8s platforms (all the hosting providers have an offering). There are offerings like civo ( https://www.civo.com/ ) starting to enter the space (a k3s-based managed budget cloud hosting platform). I have managed applications and services on bare metal since the nineties, latterly VMs, and increasingly containerized workloads. Right now I am building an infrastructure; very soon most people won't. They will just run a deployment on a managed provider's service (by clicking an icon in a catalogue), which will run a helm-style install and present them with an application or service ready to go (this already exists - look at things like the rancher catalogue). My view is that things are increasingly moving towards containerized, easy-to-deploy, managed services, and away from managing, maintaining and upgrading bare metal servers. This is definitely 'harder', and with an overhead versus the simplicity of a single server... but such is the way of the world.

          To me that sounds like many. Do you run these in VPSes in AWS or Google Cloud or something like that? Or on bare metal, ... maybe on-premise at a workplace?

          That particular cluster is in a scaleway/dedibox datacentre in Paris (although I picked up the servers through a value reseller). It's individual servers (not in the same subnet) linked together with a wireguard CNI control plane ( https://github.com/squat/kilo ). I also have bare metal clusters in various places, and even some single-node k3s 'clusters' running services on customers' premises (losing almost all of the scale advantages of kubernetes - but critically allowing exactly the same deployment as larger clusters).

          I'm getting curious about what it is about. If you want, you could share a link?

          https://talkback.trentend.co.uk

          mov.im, is that https://github.com/movim/movim?

          Correct. I like xmpp. Not everyone feels the same.

          Ok, maybe that's a bit good — there'll be a migration to Postgres 14, some time next year. And if you're slow enough, you won't need to do the migration :- )

          Because of the separation offered by containerization, we can install different versions of postgres in different namespaces, and different versions of the application in different namespaces, and we choose which is used by pointing the application at the database service we require (a yaml svc declaration)... so this is not a problem.
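
          Concretely (namespaces and names invented), the application picks its database purely by the service name it is given:

          ```yaml
          # Hypothetical: postgres 14 behind its own Service in its own namespace;
          # an older version could live in, say, a db-pg11 namespace in the same way.
          apiVersion: v1
          kind: Service
          metadata:
            name: postgres
            namespace: db-pg14
          spec:
            selector: { app: postgres }
            ports:
              - port: 5432
          # In the application Deployment, the version is selected by the fully
          # qualified service name, e.g.:
          #   env:
          #     - name: DATABASE_HOST
          #       value: postgres.db-pg14.svc.cluster.local
          ```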

          ...but yes, I'm very slow (other commitments) and it will be 14 by then.

          Maybe a bit odd, but things like that can be good for me to know, so feel free to share your thoughts, if you want :- ) both positive and negative

          I very much like the level of facility for community development - forum, questions, blog comments. I like the use of markdown, and the resizeable editing window. I like the modern interface and the collapsible side windows.

          I have typed a reply, previewed it, then clicked elsewhere (chat, I think) and not been able to find an obvious way back to my draft, which sat there tantalisingly in preview but not obviously available to edit. (In the end I created a new reply, copied and pasted from the preview, re-formatted it because the formatting was lost, and posted the reply... then I refreshed the page and it offered me the chance to resume editing the previous draft. Now I know: refreshing restores the editing option if I have done something else.) I might prefer an auto-scroll (particularly for mobile) when reaching the bottom of a forum topic/category list. I'm not entirely sure, at this point, whether the admin and configuration options are as full as I might like (the naming of levels, and things like promotion of users, for example).

          On the whole, I find talkyard good looking, simple to navigate, pleasant to use, and extremely fully featured, and it fits my intended use case well.

          1. Hi! Sorry for the late reply. I'm adding a new feature that makes it simpler to reply to long replies :- )   o.O

            Namely an editor layout where the editor is to the left, in its own "column", from top to bottom, and the text I'm replying to is to the right. Then I can see almost all of the text I'm replying to (or almost all of the preview).

        • In reply to tetricky:

          say [...] nice things about ty [...] not best of breed in every aspect

          Maybe a bit odd, but things like that can be good for me to know, so feel free to share your thoughts, if you want :- ) both positive and negative

          Then I'll get some feedback about what to do more of, and what to do less of / things to fix or improve. Even if maybe there's not much time right now, it can still be something to keep in the back of my (and others') mind(s) in the future.

          1. Trentend Tricky @tetricky
              2021-11-11 07:36:46.044Z

              Some user-level things, to meet modern expectations of interacting with a forum/discussion: for example, push notifications of new posts, swipe-to-refresh on android, and auto-scroll when reaching the bottom of the topic list.

              1. Ok, thanks :- )   (I'll reply to the other comment above later this week)