Isn't it, even in theory, impossible for any algorithm (be it an ANN or otherwise) to provide a solution to an infinitely variable chaotic problem based on solutions from a finite set of other chaotic problems?
Isn't essentially the key concept of chaotic problems that they aren't predictable, so there is no real "pattern" to train on, so to speak?
If there weren't a real underlying system, how would the universe function operationally? How do the bodies "know" how to move even though we can't predict them beyond a certain level of accuracy?
Suppose we know the system's state and rules exactly. This can't be true in the real universe, but we can construct classical, deterministic systems in pure math that we can say that about, and even those systems will exhibit this characteristic we're talking about.
You can look at it from the point of view of, if we watch the system evolve, can we tell whether the rules were violated at some point, by some arbitrarily small amount? As the chaotic systems evolve, it becomes harder and harder to tell if that is the case. There isn't a discrete transition from knowing to not knowing; our level of knowing goes down over time.
In information theory, we can see that as a loss of bits of precision on the system, so that predicting further ahead requires more and more bits of initial precision to compensate. Since we can't compute with real numbers, but only approximations requiring more bits over time, even in the pure mathematical case where everything is perfectly specified, we still lose this knowledge as the simulation progresses. It's that much worse in the real world, where we don't even start with all that many bits of precision.
It's not quite the question you asked, but... it's like the shadow of the question you asked, and it's a bit easier to explain. (And reasonably mathematically valid. You can characterize chaotic systems by how many bits they lose per time unit.)
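That bit-loss picture is easy to demonstrate. Here's a minimal sketch using the logistic map, x → 4x(1 − x), a standard textbook chaotic system (my choice of example, not from the thread). Two runs start from states that agree to about 40 bits, and the simulation grinds that agreement away:

```python
# Two runs of the chaotic logistic map x -> 4x(1-x), starting from
# initial conditions that differ by 1e-12 (roughly 40 bits of shared
# knowledge). Watch how far apart the trajectories get.

def logistic(x):
    return 4.0 * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-12
max_sep = 0.0
for step in range(60):
    a, b = logistic(a), logistic(b)
    max_sep = max(max_sep, abs(a - b))

# This map loses roughly one bit of precision per step, so after ~40
# steps the two trajectories no longer agree at all, even though the
# rules and both starting states were specified exactly.
print(max_sep)
```

The separation grows to order 1 (the full range of the system) even though nothing was random anywhere: the rules were deterministic and both initial states were exact floating-point numbers.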
This is such a great response that I’m not even going to voice my metaphysical objections because I’m absolutely certain you already know them and perhaps even explore them yourself. Have you got any recommendations for further reading?
It depends on the direction you want to take, but there is a mathematical basis to what I'm saying, not just a philosophical one. Unfortunately I've never scared up the name of the concept, or at least not in any form I can conveniently Google for. I'm not sure there is a non-textbook version of what I'm talking about; I'd like to see it myself.
It's related to the ability to read a Lyapunov exponent as a measure of bit loss. Lyapunov exponent is easy to Google up, and if you understand that and information theory it's not a difficult leap to make, but I can't find any nice explanation for people who don't already have those things.
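For the curious, the leap is small enough to sketch. Using the logistic map again as a stand-in (my example; at r = 4 its Lyapunov exponent is known to be exactly ln 2), the exponent is the orbit-average of log |f′(x)|, and dividing by ln 2 converts it from nats to bits lost per iteration:

```python
import math

# Estimate the Lyapunov exponent of the logistic map f(x) = r*x*(1-x)
# at r = 4 and read it as bits of precision lost per iteration.
# The exact value at r = 4 is ln 2, i.e. exactly one bit per step.

def lyapunov_bits(r=4.0, x=0.3, n=100_000, burn_in=1_000):
    # Let the orbit settle onto the attractor first.
    for _ in range(burn_in):
        x = r * x * (1.0 - x)
    # Average log |f'(x)| along the orbit; f'(x) = r*(1 - 2x).
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    lam = total / n               # exponent in nats per step
    return lam / math.log(2.0)    # convert to bits per step

print(lyapunov_bits())  # approximately 1.0 bit lost per iteration
```

A positive result means the system sheds that many bits of predictive precision per time step; that's the quantitative version of "our level of knowing goes down over time."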
The universe functions because all the rules are in place. But predicting the outcome of those rules without actually "replaying" them may be impossible.
Example 1: Conway's game of life.
Example 2: The Collatz conjecture.
If the number is even, divide it by two.
If the number is odd, triple it and add one.
The rules are deterministic. But you can't make any predictions other than by running the simulation.
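Those two rules fit in a couple of lines (the function name is just for illustration), and yet there's no known shortcut to a number's trajectory other than computing it step by step:

```python
# The Collatz rules, verbatim: halve if even, triple-and-add-one if odd.
# No one has found a way to predict the trajectory without running it.

def collatz_steps(n):
    """Apply the rules until reaching 1; return the whole sequence."""
    seq = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        seq.append(n)
    return seq

print(collatz_steps(6))   # [6, 3, 10, 5, 16, 8, 4, 2, 1]
print(len(collatz_steps(27)) - 1)  # 27 famously takes 111 steps to reach 1
```

Whether *every* starting number eventually reaches 1 is the open conjecture; the only known way to check any particular number is to replay the rules.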