Unless they've made significant improvements, this thing is borderline useless.
About all it does is act as a nice PR/gimmick. Magic thinking.
When I last checked, about 5 years ago, they were including client-reported stats in the analysis (time per move, window focus, and the timing between the clicks that pick a piece up and drop it).
The issue with doing that is the client is untrusted: you can set any state you want locally, so by manipulating these stats you can skew the input going into these models.
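To make that concrete, here's a minimal sketch of the problem. The field names are hypothetical (not Lichess's actual telemetry schema); the point is only that a modified client can fabricate plausible-looking timing stats before they ever reach the server, so any model that weights these inputs is trusting attacker-controlled data.

```python
import random

def fake_client_stats(num_moves: int, seed: int = 0) -> dict:
    """Fabricate 'human-like' client telemetry (hypothetical field names)."""
    rng = random.Random(seed)
    return {
        # plausible seconds-per-move, drawn to look human
        "move_times": [round(rng.uniform(2.0, 15.0), 2) for _ in range(num_moves)],
        # pretend the window never lost focus (no switching to an engine tab)
        "focus_ratio": 1.0,
        # fabricated pick-up/drop click gaps, in milliseconds
        "click_gaps_ms": [rng.randint(150, 600) for _ in range(num_moves)],
    }

# Everything the server receives here is attacker-chosen.
payload = fake_client_stats(num_moves=40)
```

Nothing server-side can distinguish this payload from honest telemetry, which is why giving these inputs weight in a cheat-detection model is a design flaw rather than a tuning problem.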
To give people a layman's short overview of the model:
It uses several CNN pooling layers for feature detection and an LSTM (with attention embeddings) in a Siamese network architecture.
CNN layers notoriously fail to detect features larger than their kernel (receptive field) size.
The LSTM embeddings can only be as useful as the features they are built on.
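A toy NumPy demo of that kernel-size limitation (this is an illustration of the general CNN property, not the actual model): a 1-D convolution with a small kernel followed by global max pooling cannot distinguish two sequences whose local patterns match but whose long-range structure differs.

```python
import numpy as np

def conv1d_valid(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Plain 'valid' 1-D convolution (cross-correlation, no padding)."""
    k = len(w)
    return np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

# A size-2 kernel that fires strongest on two consecutive 1s.
kernel = np.array([1.0, 1.0])

# Two sequences with different long-range structure...
a = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])

# ...yet after global max pooling their conv features are identical:
pooled_a = conv1d_valid(a, kernel).max()
pooled_b = conv1d_valid(b, kernel).max()
# both 2.0 - the kernel cannot see past its own window
```

Any downstream layer (LSTM included) fed these pooled features sees the same input for both sequences, which is the sense in which the embeddings "can only be as useful as the features."
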
There were a number of problems with the model at the time: a high false positive rate, overfitting, and too much weight given to client-controllable input.
When the client-controllable input was held constant, certain book openings were skewed toward false positives, and earlier book moves activated the model more often than less commonly seen positions.
I brought the issues up to ornicar years ago in an issue, but they closed the issue without comment. The posts are mostly gone now.
There were a number of disagreements (mostly about certain people with admin privileges abusing their authority; I'm not sure if it was ornicar or one of their flunkies, but either way it's not important, just completely unprofessional).
I guess it was good enough for them despite the high false positive rate (driving account churn).