I think you missed the point. The claim is that AI training from public information is no different from humans learning from public information.
My argument is precisely that mechanised learning from information operates at a scale fundamentally different from what any human can achieve. One immediate consequence is that there is no longer a natural brake on the scale of what can be sourced for use in a derivative work.
To be clear, this is not a value judgement; the point is simply that it _is_ different, just as driving is fundamentally different from what one can do with one's own feet. Of course, the mechanisation of transport is settled history, and it seems daft to argue against it. But it is different. Whether that's good or bad is a much harder question.
I can agree that it’s different, but driving is not fundamentally different from walking, in that both get you from one place to another. Nobody drives to a place because it’s fundamentally different from walking there, they do it purely because it’s faster and leaves them less tired.
I think the same thing is true for AI. Or at least, for training or acting on public information. It’s not suddenly bad because you are able to do it on all information in existence at the same time.