ZMT zurich med tech


Importing electrode positions recorded in RAS coordinates

Python API
3 Posts 2 Posters 58 Views
  • tbone
    wrote last edited by
    #1

    Hello

    I am trying to load electrode coordinates together with a subject's T1w image.
    The electrodes were recorded in RAS coordinates (origin defined by the MRI scanner).

    From the forum I've seen that the T1w image is placed in world (scanner) coordinates, so I thought using the recorded values directly would place each electrode at its expected 10-10 location.

    However, the positions were way off, which led me to apply the image transform as follows:

    import nibabel as nib
    import numpy as np
    import s4l_v1.model as model  # Sim4Life modeling API

    nii = nib.load("Image.nii.gz")
    affine = nii.affine
    inv_affine = np.linalg.inv(affine)

    # recorded electrode position in RAS (scanner) coordinates
    ras_x = a
    ras_y = b
    ras_z = c

    ras_h = np.array([ras_x, ras_y, ras_z, 1])
    voxel = inv_affine @ ras_h  # RAS -> voxel index
    voxel_vec = model.Vec3(voxel[0], voxel[1], voxel[2])
    world_pos = t1w.Transform * voxel_vec  # t1w: the imported image entity
    electrode = model.CreateSolidSphere(world_pos, 10.0)
    

    But even with this, I could not correctly register my electrode positions.

    I would greatly appreciate any help on this! Thank you for your time in advance

    • bryn
      ZMT
      wrote last edited by
      #2

      Yes, the image is positioned in world (scanner) coordinates. By default the image is imported in LPS (not RAS like nibabel). However, the import dialog has a drop-down to select the "Voxel Ordering" (e.g. RAS).
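If you ever need to do the RAS/LPS conversion manually, the two conventions differ only by a sign flip of the first two axes (x: Right vs Left, y: Anterior vs Posterior); a minimal NumPy sketch, independent of the Sim4Life API:

```python
import numpy as np

def ras_to_lps(points_ras):
    """Flip the R->L and A->P axes; the S (z) axis is shared by both conventions."""
    points_lps = np.atleast_2d(np.asarray(points_ras, dtype=float)).copy()
    points_lps[:, 0] *= -1.0  # Right -> Left
    points_lps[:, 1] *= -1.0  # Anterior -> Posterior
    return points_lps

print(ras_to_lps([10.0, -20.0, 30.0]))  # [[-10.  20.  30.]]
```

The same function also converts LPS back to RAS, since the sign flip is its own inverse.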

      I am not sure what your ras_x, ras_y, ras_z coordinates represent, but assuming this is the voxel index, you should be able to use the following API from XCoreModeling.Image:

      import numpy as np
      import XCoreModeling


      def convert_ras_to_physical_coords(image: XCoreModeling.Image, index_ras: list[int]) -> XCoreModeling.Vec3:
          spacing = image.Spacing
          p = XCoreModeling.Vec3(spacing[0] * index_ras[0], spacing[1] * index_ras[1], spacing[2] * index_ras[2])
          return image.Transform(p)


      if __name__ == "__main__":
          # get the selected image entity
          image = XCoreModeling.GetActiveModel().SelectedEntities[0]
          p = convert_ras_to_physical_coords(image, [110, 120, 130])

          XCoreModeling.CreateSolidSphere(p, radius=5)

          # sanity check: we set the value at 'p' to a unique value, e.g., -101
          image.SetValueAt(p, -101)
          # the voxel with value -101 should have the index reversed([130, 120, 110])
          # why reversed? this follows the convention used by ITK: "ITK index via [i,j,k] and numpy index via [k,j,i]"
          # https://discourse.itk.org/t/why-to-invert-the-order-of-the-indices-between-numpy-and-simpleitk/809/8
          print(np.where(image.Image == -101))
      
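The reversed index in the sanity check above can be reproduced with plain NumPy. Assuming the image array is exposed with shape (nz, ny, nx), as ITK-backed images are, an ITK-style index [i, j, k] maps to numpy index [k, j, i]:

```python
import numpy as np

nx, ny, nz = 4, 5, 6
vol = np.zeros((nz, ny, nx))  # numpy array is indexed [k, j, i]

i, j, k = 1, 2, 3             # ITK-style index [i, j, k]
vol[k, j, i] = -101           # write using the reversed (numpy) order

# np.where reports indices in numpy order, i.e. [k, j, i] = [3, 2, 1]
print([int(axis[0]) for axis in np.where(vol == -101)])  # [3, 2, 1]
```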

      Note: when you select the image you can start the tool Image Tools -> Annotate, which allows you to pick positions on the image slices in Sim4Life. The physical coordinates (and image value/label) are printed in the console. You can also export the points to a CSV file (the tool lets you choose whether you want index, coordinates and/or value).

      Note also: when you load the affine using nibabel, you get the affine after reorienting the image to RAS (nibabel always does this). You can use this in Sim4Life if your indices really correspond to RAS-ordered image data. The API to convert from index to coordinates would be:

      from nibabel.affines import apply_affine
      
      scanner_coords = apply_affine(img.affine, [i, j, k])
      
      # or
      affine = img.affine  # 4x4 matrix
      
      ijk = np.array([i, j, k, 1])
      xyz = affine @ ijk
      
      scanner_coords = xyz[:3]
      

      The latter is similar to what you posted, but for some reason you then apply the inverse affine, which should just give you back your original a, b, c?
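To see why the inverse affine cancels out in the posted snippet: mapping scanner coordinates to voxel indices with inv_affine and then mapping back with the forward affine just reproduces the input. A quick check with a made-up 4x4 affine (the translation values are arbitrary):

```python
import numpy as np

# hypothetical affine: identity rotation/scaling plus a translation
affine = np.array([
    [1.0, 0.0, 0.0, -90.0],
    [0.0, 1.0, 0.0, -126.0],
    [0.0, 0.0, 1.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
])
inv_affine = np.linalg.inv(affine)

ras = np.array([10.0, 20.0, 30.0, 1.0])  # scanner (RAS) coordinate, homogeneous
voxel = inv_affine @ ras                  # RAS -> voxel index
roundtrip = affine @ voxel                # voxel -> RAS: back where we started

print(np.allclose(roundtrip, ras))        # True
```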

        • tbone
          wrote last edited by tbone
          #3

        @bryn Thank you for your detailed response!
        I had overlooked that Sim4Life loads images in LPS orientation, but using your function I was able to place the electrodes at the intended coordinates.

        I was also wondering: if I wanted to compute the resulting E-fields per ROI based on the Brainnetome Atlas (BNA), what would be the recommended way?

        Would it be better to:

        1. Interpolate the E-fields onto the T1w image as you mentioned here https://forum.zmt.swiss/topic/735/the-shape-of-the-t1-image-and-the-shape-of-the-electric-field-are-different/18, then warp the BNA atlas into the subject's T1w space?
          Or
        2. Export the E-field as '.vtk' and then use the information directly to group field values according to atlas labels (labels warped BNA -> subject):
        import numpy as np
        import pyvista as pv
        import nibabel as nib

        mesh = pv.read(r"MaxModulation_Efield.vtk")
        mesh_p = mesh.cell_data_to_point_data()  # only cell data exists

        points = mesh_p.points * 1000  # mesh (m) to (mm)
        points_ras = points.copy()

        # LPS to RAS
        points_ras[:, 0] = -points[:, 0]  # Left <-> Right
        points_ras[:, 1] = -points[:, 1]  # Post <-> Ant

        # Mesh (world coords) to atlas voxel indices
        # (inv_affine is the inverse of the warped atlas affine; path is illustrative)
        atlas = nib.load("bna_atlas_in_subject_space.nii.gz")
        inv_affine = np.linalg.inv(atlas.affine)
        points_h = np.c_[points_ras, np.ones(len(points_ras))]
        voxels = np.rint((inv_affine @ points_h.T).T[:, :3]).astype(int)
        
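For option 2, once each mesh point has been mapped to an atlas voxel, grouping the field values per ROI is straightforward with NumPy. A sketch with made-up field values and labels (for a nibabel-loaded atlas, the labels would come from something like atlas_data[voxels[:, 0], voxels[:, 1], voxels[:, 2]]):

```python
import numpy as np

# hypothetical per-point field magnitudes and atlas labels (0 = background)
field = np.array([0.1, 0.4, 0.2, 0.8, 0.3])
labels = np.array([1, 2, 1, 2, 0])

mean_per_roi = {}
for roi in np.unique(labels):
    if roi == 0:
        continue  # skip background
    mean_per_roi[int(roi)] = float(field[labels == roi].mean())

print(mean_per_roi)  # roughly {1: 0.15, 2: 0.6}
```

Note that the voxel indices should be rounded to the nearest integer (e.g. np.rint(...).astype(int)) rather than truncated, and clipped to the atlas bounds, before looking up labels.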

        I would really appreciate your feedback on how to accurately compute this.
        Thank you again for your help!
