Maybe I read it differently from you, but it states:
"You can use resources (e.g. publications on Google Scholar, Wikipedia articles, interactions with LLMs and/or human experts without sharing the paper submissions) to enhance your understanding of certain concepts and to check the grammaticality and phrasing of your written review. Please exercise caution in these cases so you do not accidentally leak confidential information in the process."
From my reading, that would prohibit putting the paper into an OpenAI service, but it's unclear how an interaction with a local LLM that doesn't involve sharing anything is treated. If you had an air-gapped GPU rig running a local model and you formatted all of its storage when you were done, no information would be shared with anyone; you would just be running a bunch of math operations on your own machine.
You are not allowed to share the unpublished results with anyone or any LLM, period. This is literally in every review policy (e.g. https://neurips.cc/Conferences/2025/CallForPapers).