Job Opportunities

Data Engineer

  • Job ID: 2024-03
  • Location: Madrid, Spain
  • Job Type: Full-time

Nommon is a research-intensive technology company that provides decision support solutions for the transport and mobility market and other sectors where geospatial information plays a key role, such as smart cities & urban planning, retail & geomarketing, and energy & environment. Nommon’s solution portfolio includes two types of products:

  • Data products: over the past decade, Nommon has been a pioneer in applying big data and AI to population behaviour analysis. We have developed a variety of cutting-edge solutions that mine and analyse anonymised geolocation data from mobile devices, blending it with data provided by our clients or available from public databases to deliver high-quality trip matrices, footfall indicators, and other actionable insights into people’s activity and mobility patterns.
  • Software products: in recent years, Nommon has extended its product portfolio with different SaaS solutions that combine mobility data, predictive models and optimisation techniques to provide our clients with decision support systems that help them analyse alternative strategies and management actions in complex and uncertain environments.

As part of our expansion plans, we are looking for a Data Engineer to join our Technologies and Software Engineering department in Madrid.

Job description

You will work alongside an international, multidisciplinary team of talented researchers, product engineers, developers and consultants to deploy Nommon’s data acquisition architecture. Responsibilities include:

  • Develop, deploy, test and maintain the pipelines required for the extraction, transformation and loading of data from a wide variety of data sources. Main technologies: Python, PyUnit, Pandas, Cython, Spark, Airflow, MongoDB.
  • Improve data reliability, efficiency and quality by developing and implementing machine learning and statistical methods. Main technologies: scikit-learn, Spark ML and TensorFlow.
  • Identify, design, and implement internal process improvements: automating manual processes, optimising data delivery, re-designing architecture for greater scalability, etc.
  • Work with internal and external stakeholders to assist them with data-related technical issues.
  • Contribute to the definition of innovative products, services and business models.

Qualifications and skills

Required

  • MSc in Computer Science or another relevant engineering field.
  • Proficiency in one or more of the following programming languages: Python, Java, C/C++.
  • Experience with one or more of the following big data frameworks: Spark, Hadoop, Kafka.
  • Experience in data modelling, scalable ELT/ETL development, data lakes and data warehousing.
  • Strong algorithmic, programming and computational problem-solving skills.
  • Ability to quickly adapt to new technologies, concepts, and approaches.
  • Fluency in spoken and written Spanish and English.

Nice to have, but not required

  • Experience/knowledge in data pipeline and workflow management tools (e.g., Luigi, Airflow, Azkaban).
  • Experience/knowledge in machine learning and artificial intelligence.
  • Experience/knowledge of relational and NoSQL databases and data warehousing, including PostgreSQL and MongoDB.
  • Experience/knowledge with BI tools (e.g., Tableau, Qlik, Power BI).
  • Experience/knowledge of cloud (AWS, GCP) and/or on-premise computing, storage and networking environments.

Salary and benefits

  • Annual salary: 31,500 – 40,000 €.
  • Long-term, stable position.
  • Regular performance and salary reviews.
  • Nice, well-located office in the centre of Madrid.
  • Flexible working hours and possibility to work remotely up to 2 days per week.

We respond to all applicants, so expect a call or e-mail from us within a few days of submitting your application.