Data Scientist
emagine Polska
Stockholm City
Requirements
- GitHub
- Data analysis
- Documentation
- Business Intelligence (BI)
- Microsoft Platform
- Atlassian Confluence
- SQL
- Python
- Testing
Job Description
Role Overview
We are looking for a Data Scientist contractor to support additional reporting and analysis needs driven by the new self-service Confidence launch. With a team member on parental leave and limited remaining bandwidth, this role will provide critical capacity for routine analytics, experiment quality monitoring, and stakeholder reporting over a 6-month engagement.

This is an individual contributor role focused on applied analytics and reporting, not research or model development. The ideal candidate is a reliable, self-directed analyst who can ramp up quickly on our data infrastructure and deliver clean, well-communicated insights to product and engineering stakeholders.

Responsibilities
- Build and maintain dashboards and reports that track Confidence platform adoption, experiment volume, and experiment quality (EwL metrics, win rates, learning rates).
- Perform routine and ad-hoc analyses of experiment health, including traffic validation, sample ratio checks, metric pipeline monitoring, and guardrail deterioration tracking.
- Support the Metrics Catalog by validating metric definitions, computing experiment-level results, and troubleshooting data quality issues in BigQuery.
- Analyze user behavior within the Confidence product (internal and external) to inform product development priorities and self-service adoption.
- Prepare summaries and data narratives for stakeholder reviews with engineering leadership, translating experiment data into actionable recommendations.
- Collaborate with data scientists, analytics engineers, product managers, and engineers across the Experimentation Platform organization.
- Contribute to documentation and knowledge sharing to reduce key-person dependencies on the team.

Required Qualifications
- 4 years of experience in a data science, analytics, or applied statistics role, ideally in a product or platform context.
- Strong proficiency in SQL (BigQuery preferred) for querying large-scale warehouse data.
- Strong proficiency in Python for data analysis, including pandas, NumPy, and visualization libraries (Matplotlib, Seaborn, or Plotly).
- Solid understanding of A/B testing and experimentation fundamentals: hypothesis testing, p-values, confidence intervals, statistical power, and sample size estimation.
- Experience building and maintaining dashboards and reporting pipelines (e.g., Tableau, Looker, Streamlit, or similar BI tools).
- Ability to communicate analytical findings clearly to both technical and non-technical audiences.
- Comfort working independently with minimal supervision in a distributed team environment.

Preferred Qualifications
- Familiarity with sequential testing methods (group sequential tests, always-valid inference) or Bayesian experimentation approaches.
- Experience with dbt for data transformation and pipeline management.
- Exposure to variance reduction techniques (CUPED or similar) in experimentation.
- Experience with experimentation platforms (Confidence, Optimizely, LaunchDarkly, GrowthBook, or similar).
- Familiarity with experiment quality frameworks or meta-metrics (e.g., learning rate, validity checks).
- Background in platform or infrastructure analytics (as opposed to purely product or marketing analytics).
- Degree in statistics, data science, economics, computer science, mathematics, or a related quantitative field.

Tech Stack
- Data Warehouse: Google BigQuery
- Query Language: SQL (BigQuery dialect)
- Programming: Python (pandas, NumPy, SciPy, statsmodels)
- Data Transformation: dbt
- Visualization / BI: Tableau, Looker Studio, Streamlit
- Statistics / Testing: (Python library), SciPy, statsmodels
- Experimentation: Confidence platform (internal + external)
- Collaboration: Google Workspace, Slack, Confluence
- Version Control: GitHub Enterprise

Start: ASAP
Length: 6 months
Workplace: Sweden