Conventional game AI is usually search algorithms for movement (like A*) + finite state machines for behavior. No network calls to LLMs, no machine learning, etc. At the fringes, throw in the odd Markov chain for procedural text generation.
Basically, "AI" post-2019 usually means LLMs, and they're making the distinction that this is not that.
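To make the "finite state machines for behavior" half of that concrete, a guard NPC often boils down to something like this minimal Python sketch (the state names and thresholds are invented for illustration, not taken from any particular game):

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    CHASE = auto()
    ATTACK = auto()

def next_state(state, dist_to_player, can_see_player):
    """One tick of a guard's finite state machine: transitions fire on
    simple observations, no learning or network calls involved."""
    if state is State.PATROL:
        return State.CHASE if can_see_player else State.PATROL
    if state is State.CHASE:
        if not can_see_player:
            return State.PATROL  # lost sight of the player, resume patrol
        return State.ATTACK if dist_to_player < 2.0 else State.CHASE
    # ATTACK: keep attacking while in range, otherwise chase again
    return State.ATTACK if dist_to_player < 2.0 else State.CHASE

state = State.PATROL
state = next_state(state, dist_to_player=10.0, can_see_player=True)  # -> CHASE
state = next_state(state, dist_to_player=1.5, can_see_player=True)   # -> ATTACK
```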
All AI systems (including A* and LLMs) can be thought of as systems that explore a search space to obtain a certain goal. At least that's what I understood from reading Artificial Intelligence: A Modern Approach by Peter Norvig.
Both A* and deep learning explore a search space based on a goal. The difference is that DL does its exploring during training, and learns the right moves for a given input.
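As a concrete illustration of the "explores a search space toward a goal" framing, here is textbook A* on a small grid (a sketch with a made-up map and a Manhattan-distance heuristic, not any particular engine's pathfinder):

```python
import heapq

def astar(start, goal, walls, width, height):
    """Textbook A* on a 4-connected grid: repeatedly expand the node with
    the lowest cost-so-far + heuristic until the goal is popped."""
    def h(p):  # Manhattan distance: admissible on a grid with unit step costs
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    visited = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None  # goal unreachable

print(astar((0, 0), (3, 3), walls={(1, 1), (2, 1)}, width=4, height=4))
```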
Artificial Intelligence: A Modern Approach was first published in 1995.
It should be fairly obvious that what we think of as 'conventional AI' might have changed in the last 28 years, even if we hadn't just been living through a twelve-month-or-so explosion in the availability and power of generative AI models that has transformed what people associate the term with.
We're talking about 'conventional AI' in terms of games, though. I don't think there's been any explosion there apart from the fact that you can use generative AI to make art.
That book's latest edition is from 2020. But yeah, A* is search-space based; LLMs or DL I wouldn't call search-space based at all. It's function fitting.
It still seems worthwhile to make the distinction; it might be possible to think of them as the same thing at a high enough level, but the actual libraries and algorithms used are different.
Any book recommendations for conventional game AI? (Particularly with the "game" part, as games always have interesting constraints.) Thanks in advance.
I can't think of any books offhand; it'll vary depending on what kind of game you're making. Like, the AI in F.E.A.R. will be different from that in Starcraft, which is different from that in XCOM, etc.
And if you're scaling difficulty, then you're likely tuning how the AI behaves.
GDC, the GDC Vault, academic papers, and dev blogs/post-mortems of games similar to what you're interested in will have a lot of good information.
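On the difficulty-scaling point above, "tuning how the AI behaves" usually means exposing a handful of per-difficulty knobs rather than changing the algorithm. A hypothetical sketch (the parameter names and values here are invented):

```python
from dataclasses import dataclass

@dataclass
class BotTuning:
    """Hypothetical per-difficulty knobs; the AI logic itself stays the same,
    only these numbers change."""
    reaction_time_s: float  # delay before reacting to the player
    aim_error_deg: float    # random spread added to shots
    sight_range_m: float    # how far the bot can spot the player

DIFFICULTY = {
    "easy":   BotTuning(reaction_time_s=0.80, aim_error_deg=12.0, sight_range_m=20.0),
    "normal": BotTuning(reaction_time_s=0.40, aim_error_deg=6.0,  sight_range_m=30.0),
    "hard":   BotTuning(reaction_time_s=0.15, aim_error_deg=2.0,  sight_range_m=45.0),
}
```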
Planning / A* search is hardly a bunch of ifs, and is a crucial component of these systems. Expert systems are closer to a bunch of ifs, but that overlooks that the challenge is in coming up with the conditions, not writing if statements.
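To underline that point: a GOAP-style planner searches over sequences of actions to reach a goal state, much as A* searches over grid positions. A toy sketch of the idea (the actions and fact names are invented; real planners search world states with A* and a heuristic rather than brute force):

```python
from itertools import permutations

# Each action: (name, preconditions, effects) over a dict of boolean facts.
ACTIONS = [
    ("get_axe",   {},                 {"has_axe": True}),
    ("chop_wood", {"has_axe": True},  {"has_wood": True}),
    ("make_fire", {"has_wood": True}, {"warm": True}),
]

def satisfied(state, conditions):
    return all(state.get(k) == v for k, v in conditions.items())

def plan(start, goal, max_len=4):
    """Brute-force forward search over action sequences: try ever-longer
    sequences until one is applicable and achieves the goal."""
    for length in range(1, max_len + 1):
        for seq in permutations(ACTIONS, length):
            state = dict(start)
            ok = True
            for name, pre, eff in seq:
                if not satisfied(state, pre):
                    ok = False
                    break
                state.update(eff)
            if ok and satisfied(state, goal):
                return [name for name, _, _ in seq]
    return None

print(plan(start={}, goal={"warm": True}))  # -> ['get_axe', 'chop_wood', 'make_fire']
```

Note that no if statement here encodes "to get warm, first get an axe"; that ordering falls out of the search over preconditions and effects, which is the point being made above.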
A one-liner like this example is nowhere close to the real thing: there's the number of lines in the other files, the networking delays, and the fact that current 2023 "AI" tools can't run client-side on current hardware at the scale needed for multiple entities that are each weighted uniquely.
Not the person you're asking, but I think it's clear from context that they meant no artificial neural networks or other forms of AI that are trained from data. From the GitHub repo:
It provides:
Finite State Machines
Behavior Tree
Utility AI
Goal Oriented Action Planning
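For a flavor of one of those, Utility AI boils down to scoring every candidate action each tick and picking the highest scorer. A rough sketch of the idea (the scoring functions below are invented and not the repo's actual API):

```python
def score_attack(ctx):
    # Worth more when healthy and the target is close.
    return ctx["health"] * (1.0 - ctx["distance"] / 50.0)

def score_flee(ctx):
    # Worth more when badly hurt.
    return 1.0 - ctx["health"]

def score_idle(ctx):
    return 0.1  # small constant baseline so the agent always has an option

def choose_action(ctx):
    """Utility AI in a nutshell: evaluate one scorer per action, take the max."""
    scorers = {"attack": score_attack, "flee": score_flee, "idle": score_idle}
    return max(scorers, key=lambda name: scorers[name](ctx))

print(choose_action({"health": 0.9, "distance": 10.0}))  # -> 'attack'
print(choose_action({"health": 0.2, "distance": 10.0}))  # -> 'flee'
```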
I'm not sure you can really design a well-put-together experience around a DL agent, but if you can, it might as well just be handcrafted with some of these abstractions anyway.
Because in the end, you essentially need a high degree of understanding of (and constraints on) how it will behave; otherwise you've lost control over the experience as the designer.