I'm a fan of writing tests that can be either. Write your tests first so they can run against the real dependencies. Snapshot the results to feed into integration-test mocks for those dependencies, so you keep the speed benefit of a limited test scope. Re-run against the real dependencies at whatever interval feels right to ensure your contracts remain satisfied, or dedicate a test per external endpoint on top of this to validate that the response shape hasn't changed. A rough sketch of the pattern follows.
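
For what it's worth, here's a minimal sketch of that pattern using pytest. The service URL, the /users/{id} endpoint, the USE_REAL_DEPS flag, and the snapshot path are all made up for illustration; the point is just that the same test body runs in both modes:

    # One test body, two modes. With USE_REAL_DEPS=1 it hits the real service
    # and refreshes the snapshot; otherwise it replays the recorded response.
    import json
    import os
    from pathlib import Path

    import pytest
    import requests

    SNAPSHOT = Path(__file__).parent / "snapshots" / "get_user.json"
    BASE_URL = os.environ.get("USERS_API_URL", "https://users.example.com")
    USE_REAL_DEPS = os.environ.get("USE_REAL_DEPS") == "1"


    def fetch_user(user_id: int) -> dict:
        """The code under test: calls the external dependency."""
        resp = requests.get(f"{BASE_URL}/users/{user_id}", timeout=5)
        resp.raise_for_status()
        return resp.json()


    @pytest.fixture
    def user_payload(monkeypatch):
        if USE_REAL_DEPS:
            # Integration mode: call the real service and record the snapshot.
            payload = fetch_user(42)
            SNAPSHOT.parent.mkdir(parents=True, exist_ok=True)
            SNAPSHOT.write_text(json.dumps(payload, indent=2))
            return payload

        # Unit mode: replay the recorded response instead of the network call.
        payload = json.loads(SNAPSHOT.read_text())

        class FakeResponse:
            def raise_for_status(self):
                pass

            def json(self):
                return payload

        monkeypatch.setattr(requests, "get", lambda *a, **kw: FakeResponse())
        return payload


    def test_user_contract(user_payload):
        # The same assertions run in both modes, so a contract change in the
        # real service shows up on the next USE_REAL_DEPS=1 run.
        user = fetch_user(42)
        assert {"id", "name", "email"} <= user.keys()

Run it normally for the fast, mocked version, and with USE_REAL_DEPS=1 on whatever cadence you trust to re-verify the contract and refresh the snapshot.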

The fundamental point of tests should be to check that your assumptions about a system's behavior hold true over time. If your tests break, that is a good thing. Tests breaking should mean that your users would have a degraded experience, at best, if you deployed your changes anyway. If your tests break for any other reason, then what the hell are they even doing?
