Podme was founded in 2017 with a vision to enrich everyday life by taking podcast entertainment to its full potential. Podme is your go-to place for quality podcasts, and brings you a fresh selection of curated content, including popular titles found nowhere else.
Backed by Schibsted News Media, the largest media group in Scandinavia, we are now on a growth journey that requires us to strengthen the data-driven excellence in our company.
We are looking for a Data Engineer to join our data team in Stockholm, which focuses on our data foundation, building insights and predictions that contribute to a podcast experience our customers really love.
Our tech stack in the team
- Data warehouse with BigQuery
- Python as our main programming language
- ETL orchestrated by Airflow and dbt
- Cloud environment with GCP, including BigQuery, Cloud Storage, Dataflow, etc.
- Looker Studio for visualisation and reporting
- Version control with Git and CI/CD pipelines
As part of your daily work, you will:
- Design, build and maintain our enterprise data warehouse
- Develop, deploy and manage data processing workflows, including data orchestration, ETL processes, data transformations and data quality checks
- Enable teams across Podme to develop data-driven products and services, collaborating with the team's Data Lead/Architect, the Data Analyst, the Head of Insights and other Podme stakeholders to understand their data needs and ensure data accuracy, consistency and quality
- Team up with cross-functional teams to identify business requirements and develop data-driven solutions
- Implement and maintain data security and privacy policies and practices
- Continuously improve data orchestration performance, reliability, and scalability
- Keep up to date with the latest technologies, trends and best practices in data orchestration
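To give a flavour of the data quality checks mentioned above, here is a minimal sketch in plain Python. The table, column names and rules (`listen_id`, `episode_id`, `duration_sec`) are invented for illustration, not Podme's actual schema; in practice such checks would typically live in dbt tests or an Airflow task.

```python
# Minimal sketch of a data quality check, as might run after an ETL step.
# All field names and rules below are hypothetical examples.

def check_quality(rows):
    """Return a list of human-readable issues found in `rows`.

    Each row is a dict representing one record in a listens table.
    """
    issues = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Completeness: required fields must be present and non-null
        if row.get("episode_id") is None:
            issues.append(f"row {i}: missing episode_id")
        # Uniqueness: the primary key must not repeat
        if row.get("listen_id") in seen_ids:
            issues.append(f"row {i}: duplicate listen_id {row['listen_id']}")
        seen_ids.add(row.get("listen_id"))
        # Validity: listen duration must be non-negative
        if row.get("duration_sec", 0) < 0:
            issues.append(f"row {i}: negative duration_sec")
    return issues

rows = [
    {"listen_id": 1, "episode_id": "ep-1", "duration_sec": 120},
    {"listen_id": 1, "episode_id": None, "duration_sec": -5},
]
print(check_quality(rows))  # the second row trips all three checks
```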
Our team's environment is complex, as we are still setting things up. During your first month we expect you will have many questions and focus on building good connections within the team; after that, we expect you to be an active team member who contributes to designing and building our dimensional models.
Who are we looking for?
We are seeking a skilled Data Engineer with proficiency in data warehousing and data lake architectures to join our team, someone who brings:
- Expertise in Data Warehousing: Proven experience working with data warehouses, showcasing professional SQL skills.
- Dimensional Modelling: Hands-on experience in the design and implementation of dimensional modelling, particularly star schema.
- Programming Skills: Strong programming skills, with a focus on Python, to contribute to the development of efficient data pipelines.
- Analytical Mindset: A strong feel for data, analyzing information effectively and leveraging insights to build data pipelines that tackle complex challenges.
- Self-Starter and Team Player: A self-motivated individual with excellent spoken and written communication skills, capable of working seamlessly in a team that is still forming.
- Curious, Adaptable, and Collaborative: Natural curiosity and a drive to investigate data intricacies before diving into the engineering work. Your role goes beyond technical expertise: you'll be a key player in a collaborative team where effective communication is paramount, so you should adeptly tailor your message to stakeholders with varying levels of technical knowledge, ensuring seamless collaboration in our dynamic, data-centric environment.
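To make the star-schema point concrete, here is a purely illustrative sketch in plain Python: a central fact table holding measures and foreign keys, surrounded by dimension tables. The tables and field names (`fact_listens`, `dim_episode`, `dim_date`) are hypothetical examples, not an actual Podme schema.

```python
# Illustrative star schema in plain Python dicts: one fact table
# (listens) surrounded by dimension tables (episode, date).
# All table and field names here are hypothetical.

dim_episode = {
    1: {"title": "Episode A", "show": "Show X"},
    2: {"title": "Episode B", "show": "Show Y"},
}
dim_date = {
    20240101: {"year": 2024, "month": 1},
    20240102: {"year": 2024, "month": 1},
}
# Each fact row stores measures plus foreign keys into the dimensions.
fact_listens = [
    {"episode_key": 1, "date_key": 20240101, "duration_sec": 300},
    {"episode_key": 1, "date_key": 20240102, "duration_sec": 150},
    {"episode_key": 2, "date_key": 20240102, "duration_sec": 600},
]

def listen_time_by_show(facts, episodes):
    """Aggregate a measure by a dimension attribute: total seconds per show."""
    totals = {}
    for fact in facts:
        show = episodes[fact["episode_key"]]["show"]
        totals[show] = totals.get(show, 0) + fact["duration_sec"]
    return totals

print(listen_time_by_show(fact_listens, dim_episode))
# → {'Show X': 450, 'Show Y': 600}
```

In the warehouse itself this aggregation would be a SQL join between the fact table and the episode dimension, grouped by show; the star shape keeps such queries simple and fast.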
It's an advantage if you have
- Cloud Platform Proficiency: Familiarity with Google Cloud Platform (GCP) and hands-on experience with BigQuery would be advantageous.
- ETL Mastery: Exposure to ETL tools such as Airflow, dbt, Apache Spark, and Apache Beam will be a significant asset, showcasing your ability to handle diverse data transformation challenges.
- A/B Testing Aptitude: Previous experience with A/B testing methodologies is a plus, demonstrating a broader understanding of data-driven decision-making processes.
- Pipeline Deployment Expertise: Knowledge of pipeline deployment using technologies like Docker, Kubernetes, and experience with CI/CD pipelines will be beneficial, underscoring your proficiency in deploying and maintaining robust data pipelines.
A little peek at what we offer
- Internal career growth opportunities
- The flexibility of working from home; our team works hybrid and is based in Stockholm
- Opportunity for development of competencies, conferences, and various knowledge-sharing events such as hackathons, innovation days, etc.
- Wellness allowance and healthcare insurance
- Access to Podme premium podcasts
Our interview process
- Recruiter screening (30 min): an initial call with the tech recruiter. We'll tell you a bit about us, answer any questions you may have, and learn about your background and what you're looking to do.
- Home assignment (up to 3 hours): a take-home exercise in Python
- Technical interview (90 min): data-engineering-oriented discussions, together with a live SQL exercise and a follow-up on the take-home exercise, with two of our engineers.
- Values interview (60 min): a face-to-face conversation with a team member and the Head of Insights, focusing on your previous experience working within a team.
- Offer extended! If you are interested in talking to more potential coworkers or have additional questions, we will arrange additional chats for you.
- Employment form: full-time; working 1-2 days a week from our office in Stockholm is both welcomed and encouraged
- Relocation package: not offered at this time
- Start: as soon as possible