Run commands at shell access from env

Hey, I would like to execute a command when the shell is accessed.
For me it’s OK if the user can see the command, so I would just do something like
term.write("module swap cluster/" + process.env.CLUSTER); so that it swaps in the correct cluster.
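As a rough sketch of that idea (the `FakeTerminal` and `runOnConnect` names here are illustrative, not the shell app’s actual API): note that writing the text alone only types it into the prompt, so the command string probably needs a trailing newline to actually execute.

```javascript
// Minimal stand-in for a terminal that records what gets written,
// so the sketch is runnable outside the shell app.
class FakeTerminal {
  constructor() { this.buffer = ""; }
  write(data) { this.buffer += data; }
}

// Hypothetical helper: inject one command when the session starts.
// The trailing "\n" makes the shell execute it rather than just
// leaving it sitting on the prompt.
function runOnConnect(term, cluster) {
  if (!cluster) return; // nothing to swap to
  term.write("module swap cluster/" + cluster + "\n");
}

const term = new FakeTerminal();
runOnConnect(term, process.env.CLUSTER || "mycluster");
console.log(term.buffer);
```

In the real app the `term` object would be the shell app’s terminal instance rather than this mock.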

The only problem I find is that the shell app doesn’t behave like a ‘normal’ app: it seems to ignore my template/before.sh, which I would use to set the CLUSTER env variable.

I’ve been struggling with this for a long time now :confused:

It’s probably easier to drop a .sh file into your clusters’ /etc/profile.d/ folder, especially if you manage your infrastructure through tooling like Ansible, Puppet, Chef, and so on.

Otherwise you’ll have to hack the shell source code. Not sure I’d recommend that, but I can look into it if you really need. Again, I’m sure it’d be easier (simpler and probably more maintainable) to just drop a file in that folder. If you don’t already use that tooling, a simple Ansible script can easily drop a file across hundreds of nodes.
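That profile.d drop-in could look something like this (the file name, and the assumption that CLUSTER is exported before the login shell starts, are just examples; the temp dir is only so the sketch is safe to run as-is):

```shell
# Sketch: build the drop-in in a temp dir for demonstration; in
# production the target would be /etc/profile.d/cluster.sh on each
# node, deployed by Ansible/Puppet/Chef.
dropin_dir="$(mktemp -d)"

cat > "$dropin_dir/cluster.sh" <<'EOF'
# /etc/profile.d/cluster.sh -- sourced by every login shell.
# Swap in the right cluster module, but only if the module system
# is available and CLUSTER has been set, so logins on machines
# without it are not broken.
if command -v module >/dev/null 2>&1 && [ -n "$CLUSTER" ]; then
    module swap "cluster/$CLUSTER"
fi
EOF

# Sanity-check that the script parses.
bash -n "$dropin_dir/cluster.sh" && echo "dropin OK"
```

The `command -v module` guard matters because `module` is usually a shell function or alias that only exists where the modules environment is installed.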

Hm yes, but our configuration has only one login node, from which you can execute module swap cluster/xxx. That sets everything up to run commands on that cluster, but you are still SSHed into that same login node.

This looks very similar to Possible to make shell-access app start on a compute node via "srun --pty bash"?

I opened an issue about supporting easy configuration of the shell app with custom commands, and I’ll add this use case to it. Meanwhile, you could use the interim solution I suggested in that Discourse topic, or modify the shell app directly.