ZMT zurich med tech

Extract EMLF Results via Jupyter

Analysis & Postprocessing · 6 Posts · 2 Posters · 106 Views
cbenj33 · #1

How do we extract EMLF results via Jupyter? I'd specifically like to know if there is a way to generate images of slice viewers via the API. I can see how to extract the data cache from an analysis, but I cannot seem to access the results contained within the data cache, and I cannot find any documentation explaining how to do this.

For context, I am running custom parameter sweeps via Jupyter. Every time we begin a new EMLF simulation with new settings, it deletes the results of the previous simulation. Automating the data extraction process is therefore very important for programmatic analysis downstream.
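Since results are lost on each new run, one pattern is to snapshot the extracted values to disk at the end of every sweep iteration. A minimal sketch of that bookkeeping in plain Python; the function name and filename scheme are my own, not part of the Sim4Life API:

```python
import json
import tempfile
from pathlib import Path

def save_sweep_result(out_dir, params, field_values):
    """Persist one sweep iteration's extracted values to disk before
    the next simulation run overwrites the project's results."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    # Encode the parameter combination in the filename (sorted keys for
    # stability), e.g. "current_5mA_freq_10Hz.json", so downstream
    # analysis can map each file back to its sweep settings.
    tag = "_".join(f"{k}_{v}" for k, v in sorted(params.items()))
    path = out_dir / f"{tag}.json"
    path.write_text(json.dumps({"params": params, "field": field_values}))
    return path

# One call per sweep iteration, before starting the next simulation.
out = tempfile.mkdtemp()
p = save_sweep_result(out, {"freq": "10Hz", "current": "5mA"}, [1.0, 2.0, 3.0])
```

For large field arrays you would swap the JSON payload for a binary format (e.g. `numpy.save`), but the naming scheme carries over unchanged.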

bryn (ZMT) · #2

To learn how to do this with the Python API, the most straightforward approach is to run the simulation, set up the analysis pipeline, select e.g. the slice field viewer, and run "To Python (selected)". This generates the script and opens it in the "Scripter" window.

      You can then edit and copy this code to a cell in your notebook.

      Many of the Jupyter notebook tutorials under Help -> Examples will run a simulation and extract results. Consider browsing through these to find examples most similar to your needs.
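For orientation, an exported script typically follows an extractor-to-viewer pipeline. A rough, non-runnable outline of the shape to expect; the simulation name, sensor key, and output key below are placeholders, and the script Sim4Life actually generates for your project is the authoritative version:

```
import s4l_v1.analysis as analysis
import s4l_v1.document as document

# Placeholder names throughout -- take the real ones from the script
# that "To Python (selected)" generates for your project.
simulation = document.AllSimulations["My EM LF Simulation"]
sim_extractor = simulation.Results()
sensor = sim_extractor["Overall Field"]

viewer = analysis.viewers.SliceFieldViewer()
viewer.Inputs[0].Connect(sensor.Outputs["EM E(x,y,z,f0)"])
viewer.UpdateAttributes()
document.AllAlgorithms.Add(viewer)
```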

cbenj33 · #3

Thanks @bryn, I had not seen the s4l.renderer package; it was somewhat hidden in the API browser.
One question: is it possible to rotate a view with the API? I've been using renderer.SetViewDirection(Vec3(0, 0, -1)) to get a top-down view of my model, but I'd like to rotate the view 90 degrees to the left before rendering an image.

        This is what I'm after:
        test-001.png

        This is what I'm getting:

        10_hz_elecconfig_1_current_5mA_viewer_xy_jmagsliceviewer-002.png

bryn (ZMT) · #4

          Good question. The rotation of the camera around the view direction is not so easy to specify in the API.

I can suggest the following workaround; in your target image, the 'right' direction is negative Y:

          cam = XRenderer.CameraSettings() # create a camera
          cam.Distance = 100  # instead, you could zoom to the bounding box, as shown in the Jupyter notebook examples
          cam.SetViewDirection(Vec3(0,0,-1), Vec3(0,-1,0))  # specify view direction and 'right' direction
          XRenderer.RestoreCamera(cam)  # tell the GUI to use these camera settings 
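As a quick sanity check of the geometry, independent of Sim4Life, plain Python shows which world axis ends up pointing 'up' on screen. This assumes the common right-handed camera convention in which screen-up = right × view for a forward-pointing view direction:

```python
def cross(a, b):
    # Cross product of two 3-vectors (tuples).
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

view = (0, 0, -1)    # looking straight down, as in SetViewDirection
right = (0, -1, 0)   # screen-right should map to world -Y
up = cross(right, view)  # implied screen-up under this convention

# The three directions must be mutually perpendicular for a valid camera.
assert dot(view, right) == 0 and dot(view, up) == 0
print(up)  # (1, 0, 0): with -Y to the right, world +X points up on screen
```

So relative to a default top-down view with +X to the right, passing (0, -1, 0) as the second argument rotates the image a quarter turn, which is the effect asked for above.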
          
cbenj33 · #5

            To follow up on my original question, is there any documentation or examples for extracting results from the exported data cache file?

cbenj33 · #6

Nice one @bryn, that seems to work well. Thank you.
