Too often, we meet financial data scientists who are deeply unhappy with their current employers. The cause usually seems to be a lack of projects in production, but is this really down to the employer's lack of vision? We believe it is not, or at least only partially. We think financial data scientists will become the next generation of financial professionals, driven by megatrends in the industry: data, AI, and infrastructure. There is still a non-negligible risk of becoming obsolete in 5-10 years due to automation, and there is only one way to fight back: pragmatic learning.

We have been in financial data science long enough (back when it was still called quantitative finance) to see some clear patterns that we believe are detrimental to the career of a financial data scientist:

This list holds true whether you are involved in trading, research, risk management, pricing, or any other financial activity. We assume a good understanding of the foundational mathematics and statistics, so our typical audience is professionals with a STEM and/or economics background.

We conclude our guide with a list of things that would massively boost your productivity and your team's productivity, along with some words of wisdom.

Lack of core computer science fundamentals (leveraging Python)

Don’t end up spending your days in a notebook; data science is evolving. To see things in production quickly, you need to be useful to the development team. At Etna we embrace the concept of the quant dev: everybody should be able to produce production-ready code. Your Python libraries can be used in production, if and only if they are tested appropriately (see the test sketch after the course list below). No matter your background, we think you need to study the following courses to build a strong foundation in Python and computer science; a scripting mindset will not help you stay relevant in the market.

Python 3: Deep Dive (Part 1 - Functional)

Python 3: Deep Dive (Part 2 - Iteration, Generators)

Python 3: Deep Dive (Part 3 - Dictionaries, Sets, JSON)

Python 3: Deep Dive (Part 4 - OOP)
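
As a minimal illustration of what "tested appropriately" means before a library is allowed into production, here is a sketch of a pytest unit test. The module `mylib.returns` and the function `log_returns` are hypothetical placeholders for your own research code, not an actual Etna library.

```python
# test_returns.py -- a minimal sketch of a production-style unit test with pytest.
# "mylib.returns" and "log_returns" are hypothetical stand-ins for your own code.
import numpy as np
import pytest

from mylib.returns import log_returns  # hypothetical library function


def test_log_returns_matches_numpy_reference():
    # Compare the library output against an independent NumPy reference.
    prices = np.array([100.0, 101.0, 99.5, 102.25])
    expected = np.diff(np.log(prices))
    np.testing.assert_allclose(log_returns(prices), expected, rtol=1e-12)


def test_log_returns_rejects_non_positive_prices():
    # Production code should fail loudly on invalid input, not silently return NaNs.
    with pytest.raises(ValueError):
        log_returns(np.array([100.0, 0.0, 99.5]))
```
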

Depending on how hardcore you want to get, there may be times when you need superpowers to speed up your data science projects, and often an external binding is the answer. At Etna, we use Numba AOT (or Cython) to compile slow loops for production use, rather than Numba JIT, which is fine for research and development but not really designed for production.
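
To make the distinction concrete, below is a minimal sketch of Numba ahead-of-time compilation via `numba.pycc`; the module name `fastmath` and the `rolling_mean` function are hypothetical examples, not Etna code.

```python
# build_fastmath.py -- a minimal sketch of Numba AOT compilation (numba.pycc).
# The module name "fastmath" and the function "rolling_mean" are hypothetical.
import numpy as np
from numba.pycc import CC

cc = CC('fastmath')  # name of the compiled extension module


@cc.export('rolling_mean', 'f8[:](f8[:], i8)')  # explicit signature: float64[] (float64[], int64)
def rolling_mean(values, window):
    # A plain Python loop: slow under CPython, fast once compiled.
    out = np.empty(values.shape[0])
    acc = 0.0
    for i in range(values.shape[0]):
        acc += values[i]
        if i >= window:
            acc -= values[i - window]
            out[i] = acc / window
        else:
            out[i] = acc / (i + 1)  # expanding mean until the window fills up
    return out


if __name__ == '__main__':
    cc.compile()  # produces a fastmath shared library (.so / .pyd)
```

Once built, the extension imports like any other module (`import fastmath`) with no JIT warm-up at runtime, which is exactly why an AOT-compiled loop fits production better than a JIT-compiled one.
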

Computer science fundamentals are the driving force of data science productivity. If you really want to master the art of building scalable data science projects, check out the following books too:

  1. Software Architecture: The Hard Parts: Modern Trade-Off Analyses for Distributed Architectures