1 - Consume unmodified upstream
The time when vendors could proclaim that they somehow make open source projects "enterprise-ready" by releasing a "hardened" version of infrastructure software based on those upstream projects is over. OpenStack has been stable for multiple releases and is capable of addressing even the most advanced use cases and workloads without any vendor interference at all. See CERN. See AT&T.
I believe this is the most important rule of them all, because not following it is self-limiting: why would you restrict the number of people able to work on, support, and innovate with your platform in production by introducing downstream patches? The whole point of open infrastructure is to engage with the larger community for support and to create a common basis for hiring, training, and innovating on your next-generation infrastructure platform.

Multicloud strategy
Your cloud strategy is multicloud if you aim to combine the best of more than one cloud service, from more than one cloud vendor, public or private. This strategy helps you avoid the pitfalls of single vendor reliance. Spreading workloads across multiple cloud vendors gives you flexibility to use (or stop using) a cloud whenever you want.
There's nothing evil about having multiple clouds; in fact, it's a good thing.

Kubernetes is an orchestration system for automating the deployment, scaling, and management of containerized applications. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation. It aims to make applications easy to move across clusters of hosts, independent of the cloud platform or container technology used, including Docker.
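To make the "deployment, scaling and management" part concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the application name, labels, and image are illustrative, not taken from any real project:

```yaml
# Minimal Deployment sketch: asks Kubernetes to keep three identical
# copies of one container running, restarting or rescheduling them as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical name
spec:
  replicas: 3               # desired number of identical pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # any container image would do here
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this declares the desired state; the cluster then converges on three running copies, regardless of which hosts, or which cloud, they land on.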
What problem do containers solve?
Think of moving a new application component from the developer's laptop to a test environment, from a staging environment into production, and perhaps from a physical machine in a data center to a virtual machine in a private or public cloud. If you are used to working with virtual machines, you know that no matter which hypervisor you choose, strange things will happen when you move a virtual machine from one computing environment to another. The reason is that the supporting software environments are not completely identical. And it's not just different software that can cause problems: the network topology, security policies, the storage solution, basically every component in the infrastructure might be different and therefore cause an issue with the application. Containers are a solution to exactly this problem: how to get software to run reliably when moved from one computing environment to another.
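As an illustration of how a container packages its own software environment, here is a minimal Dockerfile sketch; the file names and base image are hypothetical, but the pattern is standard:

```dockerfile
# The base image pins the OS layer and Python runtime,
# so they travel with the application instead of depending on the host.
FROM python:3.12-slim

WORKDIR /app

# Dependencies are installed inside the image, not on the host machine.
COPY requirements.txt .
RUN pip install -r requirements.txt

# The application code itself.
COPY app.py .

CMD ["python", "app.py"]
```

Built once with `docker build -t myapp .`, the resulting image carries its runtime and dependencies along with the code, which is why it behaves the same on a laptop, a test server, or a public cloud.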
- Thymos joins the RedHat Accelerator program (English)
- The twelve factors for building cloud-native software-as-a-service (English)
- Kubernetes, containers, cloud native: The basics (English)
- The Agile Scrum methodology increases your customer satisfaction and revenue
- Thymos introduces three open source service models (English)
- Six open source projects that will help your business and revenue grow (English)
- Public and private cloud comparison, how to choose a cloud platform and when to use both (English)
- Working Agile and responsive within and on your cloud (English)
- Ten reasons why open infrastructure cloud is the way to success (English)
- Thymos Cloud Engineering is NEN 4400-1 certified!