
I really dislike this idea of testing in Go: only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.

I find these types of tests incredibly coupled to the implementation, since any change requires you to change your interfaces + mocks + tests. They are also very brittle, and many times they end up not even testing the thing that actually matters.

I try to write integration tests whenever possible now. Even if they are costly, I find the flexibility of being able to change my implementation without breaking a thousand tests for no reason much better to work with.



I'm a fan of writing tests that can be either. Write your tests first so that they can run against the real dependencies. Snapshot the results to feed into mocks for those dependencies, so that you keep the speed benefit of a limited test scope. Re-run against the real dependencies at whatever interval feels right to ensure your contracts remain satisfied, or dedicate a test per external endpoint on top of this to validate that the response shape hasn't changed.
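
For concreteness, a minimal sketch of that record/replay flow in Go; the -record flag and the testdata layout are my own conventions here, not anything standard:

    package client_test

    import (
        "flag"
        "io"
        "net/http"
        "os"
        "path/filepath"
        "testing"
    )

    var record = flag.Bool("record", false, "hit real dependencies and refresh snapshots")

    // fetchBody returns a dependency's response body: live when -record is
    // set (refreshing the snapshot), otherwise from the saved snapshot.
    func fetchBody(t *testing.T, url, name string) []byte {
        t.Helper()
        golden := filepath.Join("testdata", name+".golden.json")
        if *record {
            resp, err := http.Get(url)
            if err != nil {
                t.Fatalf("recording %s: %v", url, err)
            }
            defer resp.Body.Close()
            body, err := io.ReadAll(resp.Body)
            if err != nil {
                t.Fatal(err)
            }
            if err := os.WriteFile(golden, body, 0o644); err != nil {
                t.Fatal(err)
            }
            return body
        }
        body, err := os.ReadFile(golden)
        if err != nil {
            t.Fatalf("no snapshot at %s; run with -record against the real service", golden)
        }
        return body
    }

Then `go test -record` exercises the real endpoints, while a plain `go test` replays the snapshots.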

The fundamental point of tests should be to check that your assumptions about a system's behavior hold true over time. If your tests break, that is a good thing. Your tests breaking should mean that your users will have a degraded experience at best if you deploy your changes. If your tests break for any other reason, then what the hell are they even doing?


> I really dislike this idea of testing in Go: only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.

Same. I have zero confidence in these tests, and the article even states that the tests will fail if a contract for an external service/system changes.


I see this kind of testing as more for regression prevention than anything. The tests pass if the code handles all possible return values of the dependencies correctly, so if someone goes and changes your code such that the tests fail, they have to either fix the errors they've introduced or, if the desired functionality has really changed, go change the tests.

These tests won't detect if a dependency has changed, but that's not what they're meant for. You want infrastructure to monitor that as well.


If you're testing the interface, changing the implementation internals won't create any churn (as the mocks and tests don't change).

If you are changing the interface, though, that would mean a contract change. And if you're changing the contract, surely you wouldn't be able to even use the old tests?

This isn't really a Go problem at all. Any contract change means changing tests.


Yes, agreed. What the parent is saying about

> only ever use an interface, never the real implementation + mockgen the mocks based on this interface + use the mocks to assert that a function is called, with exactly these parameters and in this exact order.

is not ideal, and that's what we don't do. We test the real implementation, then that becomes the contract. We assume the contract when we write the mocks.


Don't you mean testing the interface of the implementation? I see nothing wrong with that, if so.


They mean the dependencies. If you’re testing system A whose sole purpose is to call functions in systems B and C, one approach is to replace B and C with mocks. The test simply checks that A calls the right functions.

The pain comes when system B changes. Oftentimes you can’t even make a benign change (like renaming a function) without updating a million tests.
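
To make that pain concrete, here's roughly what the style looks like with mockgen/gomock. SystemB, SystemC, NewA, and the generated NewMockSystemB/NewMockSystemC are hypothetical stand-ins:

    package a_test

    import (
        "testing"

        "go.uber.org/mock/gomock"
    )

    func TestA_CallsBThenC(t *testing.T) {
        ctrl := gomock.NewController(t)

        b := NewMockSystemB(ctrl) // generated by mockgen from your interface
        c := NewMockSystemC(ctrl)

        // The test pins exact arguments and exact ordering, which is why
        // even a benign rename or reorder inside A breaks it.
        gomock.InOrder(
            b.EXPECT().Prepare("job-42").Return(nil),
            c.EXPECT().Run("job-42").Return(nil),
        )

        a := NewA(b, c)
        if err := a.Process("job-42"); err != nil {
            t.Fatal(err)
        }
    }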


Tests are only concerned with the user interface, not the implementation. If System B changes, you only have to change the part of your implementation that uses System B to reflect it. The user interface remains the same, and thus the tests can remain the same, and therefore so can the mocks.


I think we’re in agreement. Mocks are usually all about reaching inside the implementation and checking things. I prefer highly accurate “fakes” - for example running queries against a real ephemeral Postgres instance in a Docker container instead of mocking out every SQL query and checking that query.Execute was called with the correct arguments.
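
A sketch of what that looks like in Go. The DSN assumes a disposable Postgres started out-of-band (docker run, testcontainers-go, etc.), so adjust to your setup:

    package store_test

    import (
        "context"
        "database/sql"
        "testing"

        _ "github.com/lib/pq" // driver choice is arbitrary here
    )

    func TestUsers_RoundTrip(t *testing.T) {
        db, err := sql.Open("postgres",
            "postgres://test:test@localhost:5432/test?sslmode=disable")
        if err != nil {
            t.Fatal(err)
        }
        defer db.Close()

        // Pin one connection so the TEMP table stays visible; database/sql
        // otherwise hands each statement to an arbitrary pooled connection.
        ctx := context.Background()
        conn, err := db.Conn(ctx)
        if err != nil {
            t.Fatal(err)
        }
        defer conn.Close()

        if _, err := conn.ExecContext(ctx,
            `CREATE TEMP TABLE users (id serial PRIMARY KEY, name text)`); err != nil {
            t.Fatal(err)
        }
        if _, err := conn.ExecContext(ctx,
            `INSERT INTO users (name) VALUES ($1)`, "alice"); err != nil {
            t.Fatal(err)
        }

        var name string
        if err := conn.QueryRowContext(ctx,
            `SELECT name FROM users WHERE id = 1`).Scan(&name); err != nil {
            t.Fatal(err)
        }
        if name != "alice" {
            t.Fatalf("got %q, want %q", name, "alice")
        }
    }

Nothing here asserts on which queries were executed; the test only cares that the store round-trips data correctly.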


> Mocks are usually all about reaching inside the implementation and checking things.

Unfortunately there is no consistency in the nomenclature used around testing. Testing is, after all, the least understood aspect of computer science. However, the dictionary suggests that a "mock" is something that is not authentic, but does not deceive (i.e. not the real thing, but behaves like the real thing). That is what I consider a "mock", but I'm gathering that is what you call a "fake".

Sticking with your example, a mock data provider to me is something that, for example, uses in-memory data structures instead of SQL. Tested with the same test suite as the SQL implementation. It is not the datastore intended to be used, but behaves the same way (as proven by the shared tests).
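
In Go that shared-suite idea is just a test function that takes the interface. UserStore, NewMemStore, NewSQLStore, and openTestDB are hypothetical names:

    // The contract every implementation must satisfy.
    type UserStore interface {
        Save(name string) (id int, err error)
        Name(id int) (string, error)
    }

    // testUserStore is the shared suite: the same assertions run against
    // the in-memory implementation and the real SQL-backed store.
    func testUserStore(t *testing.T, s UserStore) {
        id, err := s.Save("alice")
        if err != nil {
            t.Fatal(err)
        }
        got, err := s.Name(id)
        if err != nil {
            t.Fatal(err)
        }
        if got != "alice" {
            t.Fatalf("got %q, want %q", got, "alice")
        }
    }

    func TestMemStore(t *testing.T) { testUserStore(t, NewMemStore()) }

    func TestSQLStore(t *testing.T) {
        if testing.Short() {
            t.Skip("needs a live database")
        }
        testUserStore(t, NewSQLStore(openTestDB(t)))
    }

If the in-memory version passes the same suite as the SQL version, it has earned the right to stand in for it.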

> checking that query.Execute was called with the correct arguments.

That sounds ridiculous and I am not sure why anyone would ever do such a thing. I'm not sure that even needs a name.



