Technologies We Use

Here’s some of the secret sauce that powers our products:


C, C++, CSS, Docker, Elasticsearch, Go, GraphQL, HTML, Java, JavaScript, Less, MariaDB, Memcached, MongoDB, PHP, PostgreSQL, Python, Redis, Sass, TypeScript

Frameworks / Libraries

Angular, Jest, NestJS, Next.js, Node.js, React, Redux, Symfony, Vue.js


AWS, DigitalOcean, GCP, GitLab, ELK, Graylog, OpenSearch, Sentry, Kubernetes, ArgoCD, Istio, Grafana, Loki, Prometheus, Thanos

Tech Culture & Approaches

Sharing and collaboration

Share your ideas, thoughts, and code with colleagues, and give back to open source.

Blameless problem solving

Blaming someone for their mistakes brings more mistakes; solving problems together brings results.

Code review

Everything gets peer reviewed; it facilitates knowledge sharing and better relationships.

Infrastructure as code

When infrastructure is managed as code, it gets reviewed like code. Having more control over what we run in production is great for stability.
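As a small illustration of what that looks like in practice (all names here are hypothetical), a Kubernetes Deployment manifest kept in Git goes through the same merge-request review as application code:

```yaml
# Hypothetical manifest: service name, image, and replica count are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.4.2
          ports:
            - containerPort: 8080
```

Because the desired state lives in version control, every change to production is a reviewable, revertible diff.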


Automation

We rigorously automate everything that's repetitive and time-consuming; our people should focus on creative tasks, not repetitive ones.

Embrace change

Everything changes, and we love it, because it’s an opportunity to make things better.

Our Way of Using New Technologies

We aim to deliver our products using the freshest code, and we are always seeking newer, more innovative approaches to development.


Kubernetes doesn't solve every problem; a lot depends on how you manage complex infrastructure.

As collaboration between the dev and ops sides of IT becomes more important, we try to break down the wall between them. GitOps helps with that: it provides complete visibility into what is happening in the infrastructure and brings familiar tools like Git into day-to-day work for both sides. In our case, the technology we use to achieve that is ArgoCD.


Of all the notable features ArgoCD brings, we especially love its multi-tenancy support with SSO.

We have numerous teams deploying numerous projects to Kubernetes clusters, so multi-tenancy with SSO is a big win for us: teams can use a friendly UI to manage their precious software and see how it looks in the cluster, without needing direct access to the Kubernetes clusters themselves.
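As a sketch of how this fits together (the repository URL, project, and namespace below are hypothetical), each team's deployments can be declared as an ArgoCD Application scoped to a per-team AppProject:

```yaml
# Hypothetical ArgoCD Application: names and URLs are illustrative only.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-service
  namespace: argocd
spec:
  project: team-a            # per-team AppProject enforces multi-tenancy
  source:
    repoURL: https://git.example.com/team-a/example-service.git
    targetRevision: main
    path: deploy/
  destination:
    server: https://kubernetes.default.svc
    namespace: team-a
  syncPolicy:
    automated:
      prune: true            # remove resources deleted from Git
      selfHeal: true         # revert manual drift back to the Git state
```

With SSO group mappings on the AppProject, each team sees and manages only its own applications in the UI.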


Our GitLab monster started a few years ago as a proof of concept.

Eventually it became an indispensable part of our workflow. We use a variety of standard features: storing our code, Docker images, and packages, and running CI/CD pipelines. GitLab is a rapidly evolving project, and we try to keep up with its innovations. We use lots of helper features that improve our developers' lives, such as the dependency proxy, review environments, and the package and container registries. It has grown into an all-in-one solution, and it works great!
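A typical pipeline of this kind can be sketched in a few lines of `.gitlab-ci.yml` (the job names, images, and stages here are illustrative, not our actual configuration):

```yaml
# Illustrative .gitlab-ci.yml sketch; job names and image tags are hypothetical.
stages:
  - test
  - build

test:
  stage: test
  image: node:20
  script:
    - npm ci
    - npm test

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind          # Docker-in-Docker for building images
  script:
    # CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are GitLab's predefined variables
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  only:
    - main
```

The built image lands in GitLab's own container registry, which is part of what makes it an all-in-one solution.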

Amazon Web Services

AWS Cloud helps us innovate over and over, giving us more possibilities every year.

We use a lot of cloud services, because it keeps us from reinventing the wheel and focuses our development on creating great products. Here are just a few examples that make up our stack.


Compute

EC2, Route53, CloudFront, EKS, RDS

We run a decent farm of EC2 instances and manage it with automation.


Analytics

Redshift, RDS, Kinesis, SES, Lambda, S3, AWS Glue

Our analytics solution is scalable and adaptive to customer needs, largely thanks to the automation we have built on top of AWS.

Customer Interactions

Pinpoint, SES, Lex, Amazon Connect

Our focus on customer experience, combined with AWS's customer-centric services, helps us bring more value to our customers.


Serverless

Lambda, API Gateway, Cognito, DynamoDB

We embrace innovation, which is why our trickier apps are built on serverless.
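A minimal sketch of that pattern: an AWS Lambda handler sitting behind an API Gateway proxy integration. The function and its response shape are illustrative, not our actual code.

```python
import json

def handler(event, context):
    """Hypothetical Lambda handler for an API Gateway proxy integration.

    `event` carries the HTTP request; the returned dict becomes the HTTP response.
    """
    # Query-string parameters may be absent entirely, so default to an empty dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

API Gateway handles routing and TLS, Cognito can guard the endpoint, and the function itself stays small and stateless.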


CloudWatch Logs, CloudWatch Metrics

Automation comes at a cost: careful and responsible monitoring. We use CloudWatch to make sure our services are running at their best.

Data Warehouse

Built with Amazon Web Services, our data warehouse solution is adaptive and fully automated.

Several databases cover our needs: relational databases for hot data storage, with data arriving within a few minutes of appearing at the source; a columnar database for regular storage; and a distributed object service (S3) that keeps our data lake always available and scalable for Big Data. Every data store is universally available, monitored, backed up, and scaled with AWS.
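For a flavor of how data moves from the lake into the columnar store, a batch load can be a single Redshift COPY statement (the table, bucket path, and IAM role below are hypothetical):

```sql
-- Hypothetical names: adjust the table, bucket path, and IAM role to your setup.
COPY analytics.page_views
FROM 's3://example-data-lake/page_views/2024/02/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS PARQUET;
```

Wrapped in automation (for instance, triggered by a Glue job or Lambda), loads like this keep the warehouse hands-off.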



Want to be our next success story?
Apply now!

Let us know if you are interested! We will contact you and maybe soon you will be here at our office, drinking great coffee!