Not sure if I agree. In physics you typically have the analytical solution, but it's expensive to evaluate (integral over many dimensions) so you use machine learning as a cheap way to approximate it.
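That surrogate idea is easy to sketch; here's a toy Python version, where the "expensive" function, the sample count and the polynomial fit are all made up purely for illustration:

    import numpy as np

    # Pretend this is an expensive analytical solution (e.g. a
    # many-dimensional integral) we'd rather not evaluate millions of times.
    def expensive_truth(x):
        return np.exp(-x**2) * np.cos(3 * x)

    # Evaluate it at a modest number of points...
    xs = np.linspace(-2, 2, 50)
    ys = expensive_truth(xs)

    # ...and fit a cheap surrogate (here just a polynomial) to those samples.
    surrogate = np.poly1d(np.polyfit(xs, ys, deg=7))

    # The surrogate is now a fast stand-in for the true function.
    print(expensive_truth(0.5), surrogate(0.5))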
In most other cases of machine learning there is no "objective" solution and hence no "target function" to approximate.
Well, all of supervised learning is basically approximating an unknown function from a finite list of samples.
But it's still an approximation, with e.g. backpropagation 'simply' (in the abstract mathematical sense) nudging weights down the gradient of the loss to get predictions closer to the expected values.
The vast majority of machine learning just builds on that by going deep (more layers), automatically generating inputs (e.g. in game AIs playing against themselves), etc.
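To make "nudging weights down the gradient" concrete, here's a minimal gradient-descent loop for a single linear weight; the data, learning rate and step count are arbitrary placeholders:

    import numpy as np

    # Toy samples: we want to recover y = w * x (roughly) from examples alone.
    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 7.8])  # roughly y = 2x, plus noise

    w = 0.0    # initial guess
    lr = 0.01  # learning rate

    for step in range(1000):
        pred = w * x
        # Derivative of the mean squared error with respect to w.
        grad = np.mean(2 * (pred - y) * x)
        # Nudge the weight down the gradient, i.e. towards lower loss.
        w -= lr * grad

    print(w)  # ends up close to 2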
One might argue that's even worse than plain function optimisation: you can only vaguely guess at the target, so all your validation is suspect, and you end up having to prove it against humans by, for instance, beating them at StarCraft.
> In most other cases of machine learning there is no "objective" solution and hence no "target function" to approximate.
A crucial step of any AI/ML project is to define this objective solution/target function. For example, a task like "classify photos into cats and dogs" cannot be solved by an ML system: it's too ambiguous and ill-defined. We can define a specific, unambiguous task, which we feel is somehow similar to "classify photos into cats and dogs", but it wouldn't actually be the same task.
For example, "minimise the average L2 loss across these million example inputs" is a specific task, which we can hence use ML approaches to solve. This task has an objective solution: return 100% cat for all the inputs labelled cat, and 100% dog for those labelled dog. Interestingly, a perfect approximation of this target function would probably be considered a poor solution to the original, fuzzy problem (i.e. it will over-fit); although again that would be an ambiguous, ill-defined statement to make.
There are many ML problems which aren't of the 'fit these examples' type, but they still have some explicit or implicit target function; e.g. genetic algorithms have an explicit fitness function to maximise/minimise.
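A minimal, made-up GA, just to show the fitness function sitting front and centre as the target:

    import random

    # The explicit target: maximise this fitness function.
    def fitness(x):
        return -(x - 3.0) ** 2   # peak at x = 3

    population = [random.uniform(-10, 10) for _ in range(20)]

    for generation in range(100):
        # Keep the fittest half...
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        # ...and refill with mutated copies of the survivors.
        population = survivors + [s + random.gauss(0, 0.5) for s in survivors]

    print(max(population, key=fitness))  # converges towards 3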
Even attempts to 'avoid' this "blindly optimise" approach (e.g. regularisation, optimistic exploration, intrinsic rewards, etc.) are usually presented as an augmented target function, e.g. "minimise the average L2 loss ... plus the regularisation term XYZ".
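In code, that augmented objective is still just one function handed to the optimiser; a sketch with the usual L2 weight penalty, where the data and the coefficient are placeholders:

    import numpy as np

    def data_loss(weights, inputs, targets):
        # Average L2 loss over the examples.
        preds = inputs @ weights
        return np.mean((preds - targets) ** 2)

    def augmented_loss(weights, inputs, targets, lam=0.1):
        # Original objective plus a regularisation term: still a single
        # target function to minimise.
        return data_loss(weights, inputs, targets) + lam * np.sum(weights ** 2)

    w = np.array([0.5, -0.2])
    X = np.array([[1.0, 2.0], [3.0, 4.0]])
    y = np.array([0.0, 1.0])
    print(augmented_loss(w, X, y))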
FTFY. You would be entirely correct.