REQUIREMENTS:
• 2+ years of Python experience
• comfortable with Linux
• solid debugging skills
• knowledge of good practices and coding standards
• hands-on, proactive approach (don't wait until something is completely broken before improving functionality and productivity)
• understand API communication patterns (HTTP/REST/RPC)
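For illustration, a minimal Python sketch of the REST communication pattern mentioned above, using the requests library; the URL, endpoints, and payload are hypothetical placeholders, not a real API:

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical endpoint

    def create_item(name: str) -> dict:
        # POST a JSON body, fail on non-2xx status, return the parsed JSON response
        response = requests.post(f"{BASE_URL}/items", json={"name": name}, timeout=10)
        response.raise_for_status()
        return response.json()

    def get_item(item_id: int) -> dict:
        # GET a single resource by id
        response = requests.get(f"{BASE_URL}/items/{item_id}", timeout=10)
        response.raise_for_status()
        return response.json()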
NICE-TO-HAVE:
• Django/Celery/Django REST Framework is a plus
• understanding of microservices architecture in data processing pipelines is a plus
• Kubernetes/Docker/AWS knowledge is a plus
• experience with any of Druid.io/Elasticsearch/RabbitMQ/Cassandra/Redis is a plus
TECH STACK:
• Python/Django, JavaScript/React
• AWS/Kubernetes
• Elasticsearch, Druid.io, Cassandra, RabbitMQ, Redis, Kafka
• Airflow, Spark, EMR
DAY-TO-DAY WORK:
• develop processing pipelines for data analytics (see the sketch after this list)
• work with multi-terabyte-scale data stores such as Druid.io, Elasticsearch, Cassandra, and S3
• develop applications using a microservice approach
• design distributed applications using Kafka, Airflow, Kubernetes, Spark, and AWS
• design proposals to optimize the system and make it more effective and cost-efficient
• put your own ideas into real-world products (we are still a small team and everybody's contribution is essential and welcome!)
• write quality code - yes, shipping matters most, but in the end it's your name on it, so let's do it the right way
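To give a flavour of the pipeline work described above, here is a minimal, illustrative Airflow sketch (task-decorator API, Airflow 2.x); the DAG, task names, and data are hypothetical placeholders, not part of the actual product:

    from datetime import datetime
    from airflow.decorators import dag, task

    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def analytics_pipeline():
        @task
        def extract() -> list[dict]:
            # e.g. pull raw events from S3 or Kafka (placeholder data here)
            return [{"user_id": 1, "event": "click"}]

        @task
        def transform(events: list[dict]) -> list[dict]:
            # enrich / aggregate events before loading
            return [{**e, "processed": True} for e in events]

        @task
        def load(events: list[dict]) -> None:
            # e.g. index into Elasticsearch or write to Druid (omitted here)
            print(f"loaded {len(events)} events")

        load(transform(extract()))

    analytics_pipeline()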
WE OFFER:
• focus on product (no project switching every 3 months) - we believe this approach benefits both sides the most: you can fully focus on a specific tech stack, a specific product, and specific problems
• -real- big data projects
• hands-on involvement in all solutions and decisions within the team
• your ideas going to production
• small, independent team environment (no crazy meetings, no big management structure; the team makes the rules and ships the software)
• fully remote or in-office work
• private healthcare, MultiKafeteria, MultiSport
• training opportunities
• flexible work time