In the world of data science, where new tools emerge almost weekly and expectations evolve just as fast, students often wonder what truly matters when building a career in this field. For Leon Staubach, Data Science Fellow at Open Avenues Foundation, the answer has less to do with chasing the newest technologies and more to do with developing the mindset to navigate a constantly shifting landscape.
Leon’s path from Industrial Engineering into applied machine learning gave him a systems-first way of thinking, one that focuses on understanding problems deeply, prioritizing what drives real product impact, and learning how models behave when deployed in the real world. Through his Build Projects, he helps students experience this firsthand: not just how to model data, but how to frame problems, collaborate across teams, evaluate ethical implications, and build solutions that remain relevant as users and environments change.
In this Coffee Chat, Leon shares the trends he believes students should pay attention to, the habits that distinguish effective data scientists and the judgment required to build responsible, long-lasting ML systems. His reflections offer students a clear, practical window into what it takes to grow and stand out in today’s data-driven world.
A: I actually came to data science not from a traditional statistics or computer science route, but through Industrial Engineering, which gave me a systems-oriented mindset. At first, I was exposed more to general software engineering, but it was during my master’s degree at the University of Wuppertal that I got into data science. Several courses around Big Data and AI drove my interest in pursuing a career in the field, since I was fascinated by this new way of solving real-world problems: the rules no longer needed to be written by a software engineer but could now be learned by an ML algorithm. One of the defining moments was when I joined Breinify as a Machine Learning Engineer. Working in a startup-style environment forced me to wear many hats: research, production, data engineering, model deployment. It was at Breinify that I saw firsthand how small, well-designed ML models could drive real user personalization and product impact. Being in a fast-paced startup also taught me how to ruthlessly prioritize: with limited time and resources, I learned to focus only on the pieces of a data science project that directly moved the product forward. Typically, that meant choosing simpler models, tightening the scope, or delivering iterative value instead of chasing perfect solutions. That experience confirmed for me that I didn’t just want to “do data science,” but to do it in a way that directly shapes products and user experiences.
A: Given where I sit, I believe the most valuable advice here is to stay flexible. New frameworks, models, deployment solutions, and more are coming out every week, and it’s almost impossible for an individual to keep up with all the updates. Different companies use different solutions, so it’s most important to be able to learn new technologies quickly. There is no benefit in only learning one product just to have it become obsolete the next month. When I evaluate new technologies, I look for signals that they solve a real, recurring problem and I assess whether they integrate well into existing workflows, have strong community support, and show early signs of industry adoption; if they fail those tests, they’re usually just short-lived trends.
A: Always be curious and open to learning new processes and technologies. Things are changing faster every day, and it’s crucial to not be stuck in one system.
Coming from the startup world, it’s also super important to take ownership and responsibility over the work you are doing. Be proud of your work and put in the effort to reflect on it.
Furthermore, utilize the new emerging tools being developed. AI is making existing processes more efficient, which should not be perceived as a scary thing, but as a chance to optimize your own time-usage. Make yourself more valuable by learning these tools and improving your efficiency as a developer.
Specifically, when I mentor students, the biggest signal that someone is ready for more complex, ambiguous tasks is when they start proactively identifying next steps and asking higher-level questions about the “why” instead of just the “how”.
A: AI tools are getting better every year, but what stands out is judgment. Tools can generate code, structure a model, or automate data prep, but they can’t yet fully understand context, business constraints, or the subtle tradeoffs behind a recommendation. Students can differentiate themselves by developing strengths in the areas automation can’t replace.
If you combine strong technical foundations with thoughtful problem-framing and clear communication, you offer something uniquely valuable: the ability to turn AI tools into meaningful, impactful solutions rather than simply taking their output at face value.
When I work with coworkers or students, the way I help them develop sharper questions is by pushing them to articulate the problem from multiple angles: business, user, data, and assumptions. This has to happen before touching the code, because this reflection naturally uncovers gaps, hidden constraints, and better hypotheses that lead to far more effective and focused ML work.
A: It’s important to understand that ML solutions are living and evolving systems. The environment constantly changes: user behavior, product features, data patterns, and more. Ethics is another dimension that’s often underestimated. Even small personalization models can create feedback loops or unintended bias, so understanding evaluation beyond accuracy (for example, fairness metrics, user impact, and risk assessment) is crucial. Ultimately, sustainable ML today requires a combination of responsible design, continuous monitoring, and a willingness to iterate as the real world changes. Students who internalize this early will be better prepared for building trustworthy AI systems. One example that shaped my view on long-term ML stewardship was when ongoing monitoring revealed that a model’s performance was degrading because user behavior on the platform had shifted after a major product update; we had to pause deployment, retrain the model with new behavioral signals, and redesign parts of the pipeline. That experience taught me that reliable ML isn’t just about building a good model; it’s about consistently revisiting assumptions, validating that the system still reflects reality, and being prepared to change direction when the data tells you something new.
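To make the monitoring idea concrete, here is a minimal sketch, assuming a classification model with labeled recent traffic. It is not Leon’s or Breinify’s actual pipeline; the function names, thresholds, and metrics are illustrative assumptions showing two routine checks of the kind he describes: whether accuracy on recent traffic has drifted below the offline baseline, and whether the distribution of the model’s scores has shifted since launch.

```python
# Illustrative sketch only: hypothetical monitoring helpers.
# Thresholds, names, and metrics are assumptions, not details of any real system.
import numpy as np
from sklearn.metrics import accuracy_score


def performance_drifted(y_true_recent, y_pred_recent, baseline_accuracy, tolerance=0.05):
    """Flag drift if accuracy on recent traffic drops more than `tolerance` below the baseline."""
    recent_accuracy = accuracy_score(y_true_recent, y_pred_recent)
    return (baseline_accuracy - recent_accuracy) > tolerance, recent_accuracy


def prediction_shifted(baseline_scores, recent_scores, threshold=0.1):
    """Rough distribution-shift check via the population stability index (PSI) over score bins."""
    edges = np.histogram_bin_edges(baseline_scores, bins=10)
    base_pct = np.histogram(baseline_scores, bins=edges)[0] / len(baseline_scores) + 1e-6
    recent_pct = np.histogram(recent_scores, bins=edges)[0] / len(recent_scores) + 1e-6
    psi = float(np.sum((recent_pct - base_pct) * np.log(recent_pct / base_pct)))
    return psi > threshold, psi
```

In practice, checks like these would run on a schedule against fresh data, and a flagged result would trigger exactly the pause, retrain, and redesign cycle described above.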
Final Thoughts
Leon’s insights paint a picture of data science as a field defined not just by algorithms but by adaptability, thoughtful decision-making, responsible iteration, and an understanding of how technical choices connect to real-world outcomes. For students preparing to enter this space, his guidance offers a practical roadmap: focus on learning quickly, ask sharper questions before touching the code, understand the business context behind every model, and cultivate the judgment that automated tools can’t provide.