Our Way of Using New Technologies
We aim to build our products on up-to-date technology, and we are always looking for newer, more effective approaches to development.
Kubernetes doesn’t solve every problem by itself; a lot depends on how you manage complex infrastructure on top of it.
As collaboration between the dev and ops sides of IT becomes more important, we try to break down the wall between them. GitOps helps with that: it provides complete visibility into what is happening in the infrastructure, and it brings a familiar tool, Git, into the day-to-day work of both sides. In our case, the technology we use to achieve this is ArgoCD.
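To make the GitOps idea concrete, here is a minimal sketch of an ArgoCD Application manifest. The repository URL, application name, path, and namespace are all hypothetical placeholders, not our actual setup:

```yaml
# Hypothetical example: names, repo URL, and paths are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-service        # hypothetical app name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/team/payments.git  # placeholder repo
    targetRevision: main
    path: deploy/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

With a manifest like this, the Git repository is the single source of truth: ArgoCD continuously compares the cluster to the repo and reconciles any difference, which is exactly the visibility GitOps promises.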
Among the many notable features ArgoCD brings, we especially love its multi-tenancy support with SSO.
We have numerous teams deploying numerous projects to Kubernetes clusters, so multi-tenancy with SSO lets each team manage its applications and see, through a clean UI, how their precious software looks in the cluster, all without giving them direct access to the Kubernetes clusters themselves.
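The kind of per-team isolation described above can be expressed with an ArgoCD AppProject. The sketch below is illustrative: the team name, repository pattern, and SSO group are invented placeholders:

```yaml
# Hypothetical example: team names, repos, and SSO groups are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: team-alpha              # hypothetical team name
  namespace: argocd
spec:
  description: Applications owned by team Alpha
  sourceRepos:
    - https://gitlab.example.com/team-alpha/*   # placeholder repo pattern
  destinations:
    - server: https://kubernetes.default.svc
      namespace: team-alpha-*   # team may only deploy into its own namespaces
  roles:
    - name: developers
      policies:
        # allow syncing only this project's applications
        - p, proj:team-alpha:developers, applications, sync, team-alpha/*, allow
      groups:
        - team-alpha-devs       # SSO group mapped in via OIDC
```

Mapping an SSO group to a project role like this is what lets a team operate its own applications through the UI while never touching the cluster directly.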
Our GitLab monster started a few years ago as a proof of concept.
Eventually it became an indispensable part of our workflow. We use a range of standard features: storing our code, Docker images, and packages, and running CI/CD pipelines. GitLab is a rapidly evolving project, and we try to keep up with its innovations. We use many helper features to make our developers’ lives easier, such as the dependency proxy, review environments, and the package and container registries. It has grown into an all-in-one solution, and it works great!
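A pipeline touching several of those features might be sketched as the `.gitlab-ci.yml` below. This is a simplified, hypothetical example: the stage layout, images, and the `deploy.sh` helper are assumptions, not our real pipeline (the `CI_*` variables are GitLab’s standard predefined variables):

```yaml
# Hypothetical pipeline sketch; images, scripts, and stages are placeholders.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    # push to GitLab's built-in container registry
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

unit-tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

review:
  stage: deploy
  script:
    - ./deploy.sh review $CI_COMMIT_REF_SLUG   # hypothetical helper script
  environment:
    name: review/$CI_COMMIT_REF_SLUG           # per-branch review environment
  rules:
    - if: $CI_MERGE_REQUEST_ID
```

The `environment:` block is what powers review environments: every merge request gets its own named deployment that GitLab tracks and can tear down automatically.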
Amazon Web Services
AWS Cloud helps us keep innovating, giving us more possibilities every year.
We rely on many cloud services because they keep us from reinventing the wheel and let us focus our development on creating great products. Here are a few examples from our stack.
Built with Amazon Web Services, our data warehouse solution is adaptive and fully automated.
Several databases cover our needs: relational databases for hot data storage, where data arrives within a few minutes of appearing at the source; a columnar database for regular storage; and the distributed object service Amazon S3, which keeps our data lake available and scalable for big data. Every data store is universally available, monitored, backed up, and scaled with AWS.
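The tiering logic above can be illustrated with a small routing helper. This is purely a sketch: the tier names and age thresholds are invented for illustration and are not our production values.

```python
from datetime import timedelta

# Hypothetical thresholds, chosen only to illustrate the hot/regular/lake split.
HOT_WINDOW = timedelta(days=7)    # recent data stays in the relational DB
WARM_WINDOW = timedelta(days=90)  # regular analytics data goes columnar

def storage_tier(age: timedelta) -> str:
    """Pick a storage tier based on how old a record is."""
    if age <= HOT_WINDOW:
        return "relational"    # hot data, minutes-fresh queries
    if age <= WARM_WINDOW:
        return "columnar"      # regular analytical storage
    return "s3-data-lake"      # long-term, scalable object storage

print(storage_tier(timedelta(days=1)))    # prints "relational"
print(storage_tier(timedelta(days=30)))   # prints "columnar"
print(storage_tier(timedelta(days=365)))  # prints "s3-data-lake"
```

The point of the split is cost and latency: keep the small, fresh slice of data where queries are fast, and let everything older age out into cheaper, effectively unbounded object storage.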