Success/failure is just a binary classification signal: you can look at how it correlates with your target variables, how noisy it is for particular choices of variables over multiple trials, etc. The noisier it is, the harder it is to learn, but such is life.
I think a naive fitness function that is 0 for failure and 1 for success would be impractical. Wouldn't the signal be too difficult to find? The GA would be brute force until it stumbled on code that passed a test.
I don't think tests for factorio are trivial.
Maybe you could move the goal/fitness function along the way: start with something that compiles, then something that produces the desired input/output, etc.
Addendum: more tests, I see. Would you then lead the algorithm along a trajectory? If you can pass this simple test, you can probably pass this one as well, then this one... babysitting it along the way. Ideally you wouldn't need to, but maybe it's necessary to make the search tractable.
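That staged-test idea can be sketched as a gated fitness function. The stages and the toy target below are hypothetical, purely to illustrate "pass the easy tests before the hard ones count":

```python
# Sketch of a staged ("curriculum") fitness function for a GA.
# A stage only counts once all earlier stages pass, so the GA gets
# a gradient of partial credit instead of a flat 0-until-success.

def staged_fitness(candidate, stages):
    """stages: list of predicates ordered easy -> hard.
    Returns the fraction of consecutive stages passed from the start."""
    passed = 0
    for test in stages:
        if test(candidate):
            passed += 1
        else:
            break  # later stages are gated on earlier ones
    return passed / len(stages)

# Toy example: evolve an integer toward 42 via increasingly strict tests.
stages = [
    lambda x: isinstance(x, int),   # "it compiles"
    lambda x: 0 <= x < 100,         # "output has the right shape"
    lambda x: abs(x - 42) <= 5,     # "close to the desired output"
    lambda x: x == 42,              # "passes the real test"
]

print(staged_fitness(41, stages))  # 0.75: passes the first three stages
print(staged_fitness(-7, stages))  # 0.25: only the first stage
```

The GA's selection step would then favor candidates with higher staged fitness, pulling the population along the trajectory one test at a time.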
Check out Thompson Sampling and multi-armed bandit problems for how this can work out in real life. (I tend to think this approach is much better than genetic algorithms...)
Each 'bandit' is a random boolean outcome, governed by some hidden success probability. Thompson Sampling trades off exploration against exploitation. If there are never any successes, then all bandits are equally bad, and you just keep exploring randomly until you find some success (or give up). If you do have some success, you can try to exploit it.
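A minimal sketch of Beta-Bernoulli Thompson Sampling, the standard setup for boolean outcomes (the arm probabilities below are made up for illustration):

```python
import random

# Thompson Sampling for Bernoulli bandits. Each arm keeps a
# Beta(successes + 1, failures + 1) posterior over its hidden success
# probability; each trial pulls the arm whose posterior *sample* is
# largest. Arms with no data have wide posteriors, so they still get
# explored; arms with many successes get sharp posteriors and get
# exploited.

def thompson_run(true_probs, n_trials, rng=random):
    wins = [0] * len(true_probs)
    losses = [0] * len(true_probs)
    total_reward = 0
    for _ in range(n_trials):
        # Draw one plausible success rate per arm from its posterior.
        samples = [rng.betavariate(wins[i] + 1, losses[i] + 1)
                   for i in range(len(true_probs))]
        arm = samples.index(max(samples))
        if rng.random() < true_probs[arm]:  # simulate the hidden coin flip
            wins[arm] += 1
            total_reward += 1
        else:
            losses[arm] += 1
    return wins, losses, total_reward

wins, losses, reward = thompson_run([0.1, 0.5, 0.8], 2000)
# After enough trials, most pulls should concentrate on the 0.8 arm.
print(wins, losses, reward)
```

Note the behavior the parent comment describes: with all-zero counts every arm's posterior is uniform, so pulls are effectively random until some success shows up.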
For a problem with continuous parameters, you can discretize the parameter space by binning, and then choose randomly within a bin for each trial. 'Exploiting' a particular bin might lead to breaking it into more bins for finer resolution.
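A toy sketch of that binning idea for a single parameter on [0, 1]; the helper names and bin layout are illustrative, not from any particular library:

```python
import random

# Discretize a continuous parameter into interval bins. A trial draws
# a point uniformly inside the chosen bin; "exploiting" a promising bin
# splits it in two, giving finer resolution where successes cluster.

def sample_from_bin(bins, index, rng=random):
    """Draw a trial point uniformly from bins[index] = (lo, hi)."""
    lo, hi = bins[index]
    return rng.uniform(lo, hi)

def split_bin(bins, index):
    """Replace bins[index] with two half-width bins."""
    lo, hi = bins[index]
    mid = (lo + hi) / 2
    return bins[:index] + [(lo, mid), (mid, hi)] + bins[index + 1:]

bins = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
x = sample_from_bin(bins, 2)   # a trial point in the third bin
bins = split_bin(bins, 2)      # exploit: refine that bin
print(bins)
# [(0.0, 0.25), (0.25, 0.5), (0.5, 0.625), (0.625, 0.75), (0.75, 1.0)]
```

Each bin then acts as one 'bandit' arm for the Thompson Sampling machinery above, so the bandit framing carries over to continuous parameters unchanged.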