Work Experience
As the Data Scientist in the Analytics team, I deliver insights to various
stakeholders within the organisation, as well as tools that help our customers
become better investors. This work takes different shapes and forms,
such as building recommender systems using matrix factorisation models,
evaluating campaigns with Causal Impact modelling, or performing
significance testing of marketing campaigns.
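For illustration, a minimal matrix-factorisation sketch (toy data; the production recommenders handle sparse implicit feedback and are considerably more involved):

    import numpy as np
    from sklearn.decomposition import NMF

    # Toy user-by-item interaction matrix (hypothetical data).
    interactions = np.array([
        [5, 3, 0, 1],
        [4, 0, 0, 1],
        [1, 1, 0, 5],
        [0, 0, 5, 4],
    ])

    # Factorise into k latent user and item factors.
    model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
    user_factors = model.fit_transform(interactions)  # (n_users, k)
    item_factors = model.components_                  # (k, n_items)

    # Reconstructed scores act as recommendation strengths.
    print(np.round(user_factors @ item_factors, 1))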
I have implemented regression and classification models for different
tasks, such as budget allocation. These models are typically built using
frameworks such as scikit-learn, CatBoost, XGBoost and Keras.
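A generic sketch of that workflow with scikit-learn (synthetic data standing in for real features; not a specific production model):

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real feature set.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train, y_train)

    # Evaluate discrimination on held-out data.
    proba = model.predict_proba(X_test)[:, 1]
    print(f"AUC: {roc_auc_score(y_test, proba):.3f}")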
During a platform upgrade, I helped evaluate the expected performance
of the system using Prophet time-series forecasting.
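In outline, a Prophet forecast looks like this (placeholder series; the real input was historical system metrics):

    import pandas as pd
    from prophet import Prophet

    # Prophet expects columns "ds" (timestamp) and "y" (value).
    history = pd.DataFrame({
        "ds": pd.date_range("2023-01-01", periods=90, freq="D"),
        "y": range(90),  # placeholder values
    })

    model = Prophet()
    model.fit(history)

    # Forecast 30 days beyond the observed history.
    future = model.make_future_dataframe(periods=30)
    forecast = model.predict(future)
    print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())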
I developed a conversion tool that transforms files from the proprietary
QlikView data format (QVD) to the open Parquet format, easing migration tasks.
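The conversion reduces to decoding QVD and writing Parquet. A sketch, where read_qvd is a hypothetical stand-in for the actual decoder (the real tool parses the QVD binary layout, an XML header followed by bit-packed symbol tables):

    import pandas as pd

    def read_qvd(path: str) -> pd.DataFrame:
        # Hypothetical stand-in for the QVD decoder.
        raise NotImplementedError

    def qvd_to_parquet(qvd_path: str, parquet_path: str) -> None:
        # Decode the proprietary file, then write it as open,
        # columnar Parquet (requires pyarrow or fastparquet).
        df = read_qvd(qvd_path)
        df.to_parquet(parquet_path, index=False)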
I have been part of the Non-Maturity Deposit risk modelling team.
During this time, I have also worked with the Google Cloud Platform, where
I have built ETL pipelines using components including Cloud Functions,
BigQuery, AI Platform Notebooks (Jupyter), Cloud Composer (Apache Airflow)
and Cloud Dataflow (Apache Beam). Increasingly, we use Terraform to keep
our growing cloud infrastructure in committable code form.
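A minimal Cloud Composer DAG sketch (Airflow 2.4+ syntax; the DAG name, task names and logic are hypothetical):

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        pass  # pull data from the source system (omitted)

    def load():
        pass  # load the result into BigQuery (omitted)

    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        # Run extract, then load, once per day.
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task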
Highlights
As part of the Exploratory Analytics team, I worked on building a platform
that enables Data Scientists and Analysts by giving them access to data
from various sources on the Azure cloud platform.
The service was built using components such as Azure Functions,
Azure Data Factory, Azure Data Lake Analytics and Azure SQL Data Warehouse.
Development consisted mostly of SQL (T-SQL, U-SQL with C#/LINQ, and
Databricks/Spark SQL), with some programming in Node.js.
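As a flavour of the Databricks/Spark SQL part, a self-contained PySpark example (table and column names are hypothetical):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("exploratory").getOrCreate()

    # Register a tiny example dataset as a SQL view.
    df = spark.createDataFrame(
        [(1, 10.0), (1, 5.0), (2, 7.5)], ["customer_id", "amount"])
    df.createOrReplaceTempView("transactions")

    # Query it with Spark SQL.
    spark.sql("""
        SELECT customer_id, SUM(amount) AS total
        FROM transactions
        GROUP BY customer_id
    """).show()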
Highlights
Part of a team working with cloud technology in the adtech field, developing
a new metric ("Seen") within digital marketing funnels.
Responsible for setting up a scalable solution for analysing user behavioural
data for reporting and visualisation, with GDPR compliance in mind.
Development took place in AWS, using SQL, Python and Node.js together with
AWS services such as Kinesis, Athena and Lambda, following a
"GitOps/infrastructure-as-code" methodology.
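A sketch of the Lambda side of such a pipeline, consuming Kinesis records (the event envelope follows the standard Kinesis-to-Lambda integration; the payload schema is hypothetical):

    import base64
    import json

    def handler(event, context):
        """Lambda handler triggered by a Kinesis stream."""
        for record in event["Records"]:
            # Kinesis delivers payloads base64-encoded.
            payload = base64.b64decode(record["kinesis"]["data"])
            seen_event = json.loads(payload)
            # Drop direct identifiers before persisting (GDPR sketch).
            seen_event.pop("user_id", None)
            print(json.dumps(seen_event))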
Highlights
(Consulting highlights at various exciting companies in Sweden are described
separately; see above.)
Work also included shorter consultations involving the design and
implementation of ETL pipelines on the Azure cloud platform. One example is
a real-time pipeline visualising patient blood-pressure data using Azure
Functions, Azure Stream Analytics and Power BI.
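A sketch of one ingestion step as an Event Hub-triggered Azure Function (Python v1 programming model; the reading schema is hypothetical, and the actual pipeline also routed data through Stream Analytics into Power BI):

    import json
    import azure.functions as func

    def main(event: func.EventHubEvent) -> None:
        # Decode one blood-pressure reading from the event stream.
        reading = json.loads(event.get_body().decode("utf-8"))
        systolic = reading["systolic"]
        diastolic = reading["diastolic"]
        if systolic > 180 or diastolic > 120:
            # Flag crisis-level readings before they reach the dashboard.
            print(f"ALERT {reading['patient_id']}: {systolic}/{diastolic}")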
Highlights
- Evaluation of various target language backends.
- Tremendous speedup: a run that previously took over an hour now completes in single-digit minutes.
- Code was mainly written in C/C++.
- Evaluation using a large Beowulf cluster.
- Analysis of large data sets.
Highlights
- Roles: Software Architect, Developer, Tester.
- Designed and implemented a client/server protocol.
- Implementation mostly in Java, using Swing for the UI.
- Agile development model.
- First on the team to migrate development to Linux and OS X.
- Introduced open-source tools to the team (CVS, Doxygen, NetBeans).
Publications
Skills
Machine learning
Programming
Databases
Tools