Creating Memory and GPU Attributes for Jupyter

Hello,

I would like to add memory and GPU attributes to the launcher form and am struggling to figure out how to do it. I used the info from the thread below to add the widgets, but I’m not sure where to configure the back end to use the selections when the job launches. Any info would be greatly appreciated!

Thanks,
Aragorn.

You should use the native attributes. Look at how we do it for MATLAB at OSC as an example. You can browse that whole organization for bc_<app>-related things with all sorts of examples.

We’re using the native attribute to specify the CLI arg ppn=<%= ppn %><%= node_type %>. We use Torque; that’s why there’s a native/node/resources hierarchy in the MATLAB YML.

Something like this should work for you. I think ppn is the CLI flag for hardware requests in PBS Pro? In any case, native relates directly to CLI flags, so if you want to pass a CLI flag called foo, you’d call it out directly, like foo=<%= my_foo_param %>.

script:
  # directly specify script/native
  native: "<%= bc_num_slots.blank? ? "1" : bc_num_slots.to_i %>:ppn=<%= ppn %><%= node_type %>"

Also note that you can pass many flags/arguments here, not just one. So native: "foo=bar a=b c=d --long-arg" would pass all of those into the CLI command.
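Note that native also accepts an array, where each element becomes its own argument on the submit command. A minimal sketch (the -l resource string here is just an example; substitute your scheduler’s real flags):

```yaml
script:
  # array form: each element is passed as a separate CLI argument
  native:
    - "-l"
    - "nodes=1:ppn=<%= ppn %>"
```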

For future generations, this is what I wound up having to do to get PBS Pro to work; hopefully this will save someone some pain:
form.yml.erb:

---
cluster: "grid"
form:
  - modules
  - extra_jupyter_args
  - bc_queue
  - cores
  - memory
  - bc_email_on_started
attributes:
  modules: "grid-default"
  extra_jupyter_args: ""
  cores:
    widget: "number_field"
    label: "Number of CPU cores [ 1 - 64 ]"
    value: 1
    min: 1
    max: 64
    step: 1
    id: 'cores'
  memory:
    widget: "number_field"
    label: "Amount of RAM [ 1 - 1507 ]"
    value: 1
    min: 1
    max: 1507
    step: 1
    id: 'memory'

submit.yml.erb:

<%-
ncpus = cores.blank? ? 1 : cores.to_i
mem = memory.blank? ? 1 : memory.to_i
%>


script:
  native: ["-l ncpus=<%= ncpus %> mem=<%= mem %>GB"]

That gives you the option to specify queue, cores, memory, and whether you want an email. :slight_smile:
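To make the ERB fallback logic above concrete, here is the same behavior sketched in Python (illustration only; render_native is a made-up helper, not part of OnDemand):

```python
def render_native(cores, memory):
    # Mimic the ERB above: blank form values fall back to 1,
    # otherwise coerce to an integer (roughly Ruby's to_i).
    ncpus = 1 if cores in (None, "") else int(cores)
    mem = 1 if memory in (None, "") else int(memory)
    # Same shape as the native array in submit.yml.erb
    return ["-l ncpus=%d mem=%dGB" % (ncpus, mem)]

print(render_native("4", "16"))  # ['-l ncpus=4 mem=16GB']
print(render_native("", ""))     # ['-l ncpus=1 mem=1GB']
```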

Thanks for the assistance!
Aragorn.

I’m trying to do the same for the RStudio application.

Here is my form.yml file

---
cluster: "blue"
form:
  - bc_account
  - bc_queue
  - bc_num_hours
  - bc_num_slots
  - cores
  - bc_email_on_started
attributes:
  cores:
    widget: "number_field"
    label: "Number of cores"
    value: 1
    help: |
      Number of cores on node type (3 GB per core.)
    min: 0
    max: 32
    step: 1
    id: 'cores'
  bc_num_slots: "1"

submit.yml.erb reads as follows:

---
<%-
ncpus = cores.blank? ? 1 : cores.to_i
%>

batch_connect:
  template: "basic"
  script:
  native:  "<%= bc_num_slots.blank? ? "1" : bc_num_slots.to_i %>:n=<%= ncpus %>" 

I do get the correct menu items; however, the instance always starts with one core and one node. I need one node, but I want the ability to change the number of cores.

This is the job_script_options.json file generated by the RStudio job

{
  "job_name": "sys/dashboard/dev/RStudioL",
  "workdir": "/home/users/neranjan/ondemand/data/sys/dashboard/batch_connect/dev/RStudioL/output/0be9e7f2-e44e-4ca3-a0dc-c04b9667ca45",
  "output_path": "/home/users/neranjan/ondemand/data/sys/dashboard/batch_connect/dev/RStudioL/output/0be9e7f2-e44e-4ca3-a0dc-c04b9667ca45/output.log",
  "shell_path": "/bin/bash",
  "accounting_id": "test0001",
  "queue_name": "qInt",
  "wall_time": 3600,
  "native": [
    "-N",
    1
  ],
  "email_on_started": false
}

For some reason, it does not get the correct CLI options. What else should I change? By the way, I’m using SLURM.

I think you may have two things wrong. First, the YML isn’t indented correctly: script and batch_connect should be at the same indent (the 0th indent). See below.

Secondly, I think the CLI argument for cores in SLURM is -c / --cpus-per-task. Though I could be wrong on that; I just quickly checked the docs, so I don’t know it for sure. You may need to tweak it, or you may be more familiar with SLURM’s CLI than I am. What you’ve listed above looks like Torque’s CLI.

batch_connect:
  template: "basic"
# script has same indent as batch_connect
script:
  # I think this is the slurm cli for nodes (-N) and cpus (--cpus-per-task)
  native:  "-N <%= bc_num_slots.blank? ? "1" : bc_num_slots.to_i %> --cpus-per-task=<%= ncpus %>" 

Hope that helps!
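For reference, with the corrected indentation, the ERB ternary renders roughly like this (a Python sketch for illustration; render_slurm_native is hypothetical, not OnDemand code):

```python
def render_slurm_native(bc_num_slots, ncpus):
    # Mirrors the ERB: a blank bc_num_slots falls back to "1"
    slots = "1" if bc_num_slots in (None, "") else str(int(bc_num_slots))
    return "-N %s --cpus-per-task=%d" % (slots, ncpus)

print(render_slurm_native("", 4))   # -N 1 --cpus-per-task=4
print(render_slurm_native("2", 8))  # -N 2 --cpus-per-task=8
```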

It worked :slight_smile: Thanks.