4 min read · 2020-05-30 · A data science platform can change the way you work. It’s more than just a tool; it’s a way to wrangle data and turn every member of your team into a high-performing unit, capable of pivoting and…
7 min read · 2020-03-01 · A typical interview process for a data science position includes multiple rounds. Often, one such round covers theoretical concepts, where the goal is to determine whether the candidate knows the…
6 min read · From 2018 · All your employees need to understand the basics of analyzing data.
Relegating all data knowledge to a handful of people within a company is problematic on many levels. Data scientists find it frustrating because it’s hard for them to communicate their findings to colleagues who lack basic data literacy.
Business stakeholders are unhappy because data requests take too long to fulfill and often fail to answer the original questions. In some cases, that’s because the questioner failed to explain the question properly to the data scientist in the first place.
7 min read · From 2019 · Beware the data science pin factory.
Coefficients, models, model types, hyperparameters: every element you’ll need must be learned through experimentation, trial and error, and iteration. With pins, the learning and design are done up front, before you make anything. With data science, you learn as you go, not before you go.
When data scientists are organized by function, every step, change, and handoff requires coordination among many specialists, which makes coordination costs high. For example, statistical modeling specialists who want to experiment with new features must coordinate with data engineers to augment the data sets every time they want to try something new. Similarly, every newly trained model means the modeler needs to coordinate with someone else for deployment. Coordination costs act as a tax on iteration, making it slower and more expensive, and more likely to dissuade exploration. That can hamper learning.
Instead, organize data scientists so that they are optimized to learn. This means hiring “full-stack data scientists,” generalists who can perform diverse functions: from conception to modeling to implementation to measurement.