satvikpendem
33 days ago
on: A guide to local coding models
No? First of all, you can limit how much of the unified RAM goes to VRAM, and second, many applications don't need that much RAM. Even if you allocate 108 GB to VRAM and 16 GB to applications, you'll be fine.
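As a sketch of what "limit how much of the unified RAM goes to VRAM" can look like in practice: on recent Apple Silicon Macs, the cap on GPU-wired memory is exposed as the `iogpu.wired_limit_mb` sysctl (the exact key name and default vary by macOS version, and the setting resets on reboot). This is an assumption-laden illustration, not something stated in the thread:

```shell
# Hypothetical: on a 128 GB unified-memory Mac, dedicate ~108 GB to the GPU.
# The sysctl value is in MB, so convert first.
VRAM_MB=$((108 * 1024))
echo "$VRAM_MB"   # 110592

# Then, as root (requires sudo; key may differ on older macOS versions):
#   sudo sysctl iogpu.wired_limit_mb=$VRAM_MB
```

The remaining ~20 GB stays available to the OS and ordinary applications, which is the split described above.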
brulard
33 days ago
What about the rest of the resources, like CPU and GPU? Wouldn't your work be affected while inference is running?
satvikpendem
33 days ago
LLM inference doesn't really use much CPU. In short: no, your work would not be affected.