Q/A Processes for Complex Analysis?

For the data scientists and quants out there… How do you sanity-check your assumptions and sampling methods during a large analysis? At some point in our careers, we’ve all run into the occupational hazard of going down the proverbial “rabbit hole” during a complex synthesis of multiple data sets, and ended up with conclusions that weren’t necessarily supported.

Curious what Q/A checks you use, particularly on projects tackled by a team, to avoid overstating the applicability of findings – maybe the equivalent of a “stage-gate” process (or some other approach) to ensure the team is staying on mission?

On a related topic – how do you keep track of compounding error in a layered analysis with many assumptions and several iterations of sample subsetting?
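For concreteness, here is a toy sketch of what I mean by compounding error. The step names and uncertainty figures are made up for illustration; it assumes each subsetting step contributes an independent relative error, combined in quadrature:

```python
import math

# Hypothetical pipeline: each subsetting/joining step carries its own
# assumed relative uncertainty (the percentages here are invented).
steps = [
    ("drop rows with missing labels", 0.02),  # ~2% relative error
    ("subsample to balance classes", 0.05),
    ("join with external data set", 0.03),
]

# Combine independent relative errors in quadrature and print the
# running total after each step.
total = 0.0
for name, rel_err in steps:
    total = math.sqrt(total**2 + rel_err**2)
    print(f"after {name!r}: cumulative relative error ~ {total:.3f}")
```

Of course, real steps are rarely independent, which is partly why I'm asking how people track this in practice.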

Would love your thoughts on this for an article I am writing. The topic is how organizations and individuals can apply objectivity and critical thinking THROUGHOUT a project to ensure they’re making reliable conclusions and recommendations.

Looking forward to your suggestions!


Comments URL: https://news.ycombinator.com/item?id=43250243
