If you're interested in using this card for local LLMs, I would recommend reading this Reddit post first. Intel's AI ecosystem is not ready for prime time yet. It will probably get there eventually, but that could take years, and it's not there right now. So if you want to save yourself a lot of time and headache, you're better off spending the extra money on a more proven solution.
That said, if you've carefully vetted your use case and confirmed that the existing tooling and compatibility on the Intel side are good enough, then great. Either way, do your due diligence.