This is a question for the OOD team.
I have noticed that some of the OOD apps OSC publishes that can use OpenGL graphics (Ansys, Matlab, VMD, ParaView, …) use VirtualGL. VGL needs an X server running on the compute node, which in turn sits on the GPU and can potentially eat away resources from CUDA computational jobs running on that same GPU.
So, my question is: how does OSC technically support VirtualGL? Do you run an X server on all your compute nodes? Do you start one at job start via a flag (which I don't see in the OOD apps, so probably not)? Do you put cheap GPUs in each node dedicated to GL, independent of the computational GPUs? Or something else?
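For context, the "start X with the job" option I'm imagining would look roughly like the sketch below. This is a hypothetical per-job prolog fragment, not anything I've seen in the OOD apps; the display number, xorg.conf path, and the test app are all placeholders:

```shell
#!/bin/bash
# Hypothetical job prolog sketch: start a bare X server on the node's GPU
# (display :1 and the config path are placeholders), run the OpenGL app
# through VirtualGL against that display, then tear the server down.
Xorg :1 -config /etc/X11/xorg-headless.conf -noreset &
XPID=$!

# vglrun's -d flag selects the "3D" X display that VGL renders on; the
# rendered frames are read back and sent to the user's 2D/VNC session.
vglrun -d :1 glxinfo | grep "OpenGL renderer"

# Stop the X server when the job ends.
kill $XPID
```

Whether something like this runs per-job, or the X server is simply left up permanently on every GPU node, is exactly what I'm trying to find out.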
I'd appreciate any details that would help us consider such a deployment over here.
Naturally, if other HPC centers have their own solutions for compute-node GL rendering, it'd be great to hear about them too.
What we do over here is maintain a set of standalone (interactive) nodes that run X and VGL on mid-range Nvidia GTX cards, but we don't have any X or VGL on the cluster compute nodes. Most of our compute nodes have only onboard video, and our GPU nodes are heavily utilized for computation, but we'd like to see whether there is room to use the GPU nodes for GL with OOD apps like Ansys or ParaView.
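For comparison, the usual VirtualGL flow on our standalone interactive nodes looks like this (the hostname is a placeholder, and this assumes VGL is already configured on the node's GTX card):

```shell
# From the user's workstation: vglconnect wraps ssh and sets up the VGL
# image transport back to the local display.
vglconnect user@viz-node.example.edu

# Then, on the viz node: run the OpenGL app through VirtualGL. It renders
# on the node's GPU and streams the finished frames back to the client.
vglrun paraview
```

That works well for us, but it keeps GL rendering off the batch system entirely, which is the gap we'd like to close for the OOD apps.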