Automatic Head Model including 1010-System / Electrode Placement

lucky_lin (#11)

The segmentation is indeed asymmetric. I used the IXI025-Guys-0852-T1.nii.gz provided in the tutorial. Is it because the T1w image itself is asymmetric? I don't know much about brain structure; is this normal?
[screenshots of the asymmetric segmentation]

lucky_lin (#12)

      There are some holes in the other tissues.
[screenshot showing holes in the other tissues]

bryn (ZMT) (#13)

        @lucky_lin "Other tissues" is basically fat. I don't think this is an issue.

Note that the head is not perfectly aligned with the axes, i.e. your XY plane may not be perpendicular to the "up" direction. You can also change the slice plane by using the interactive slice instead of the axis-aligned one.

Finally: yes, it is normal that real subjects/people have asymmetric features. If you are interested in this research question, you could test the impact of individual tissues (conductivity) by

• assigning the same tissue property to all tissues (except for Air?). Is this nearly/more symmetric?
• assigning the same tissue property to all tissues, except for one tissue (and Air). Candidates are the thin layers around the brain, e.g. Dura, CSF, Brain grey matter, Skull cortical/cancellous, [Galea, Muscle, ...]. A sketch of both experiments follows below.
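
As a minimal illustration of these two experiments (plain Python, independent of the Sim4Life API; the tissue names and conductivity values below are placeholders, not values from a material database):

baseline = {  # hypothetical conductivities in S/m; replace with your material database values
    "Skin": 0.17, "Skull_cortical": 0.0064, "Skull_cancellous": 0.025,
    "CSF": 1.78, "Dura": 0.1, "Brain_grey_matter": 0.24, "Air": 0.0,
}

# Experiment 1: the same conductivity for everything except Air
uniform = {name: (0.0 if name == "Air" else 0.3) for name in baseline}

# Experiment 2: uniform everywhere except one candidate tissue (here CSF)
single_out = dict(uniform)
single_out["CSF"] = baseline["CSF"]

How these maps are then assigned to the simulation materials depends on your setup.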
lucky_lin (#14)

I don't quite understand why the T1w image I imported is not aligned with the grid. In the GUI the grid can be adjusted, but how can I ensure they are aligned in a scripted simulation?

lucky_lin (#15)

@bryn I tried to set both the rotation and translation of the T1w image to 0 and then generate the reference points, but it reported an error:

Modeler : [Error] Exception during import: Expecting 'Version 1.0' on first line
Modeler : [Error] operation unsuccessful.

bryn (ZMT) (#16)

              The T1w image is placed in world (scanner) coordinates. This is useful, e.g. to align different acquisitions (e.g. T1, T2 with different resolutions or field of view).

              You could remove the rotation and translation, e.g., by setting an identity transform. However, in my experience, it is often useful to preserve the position in world coordinates.

import XCoreModeling

img = XCoreModeling.Import("some_t1w_mri.nii.gz")
img.Transform = XCoreModeling.Transform()  # identity transform: removes rotation and translation
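
If you want to see where this rotation/translation comes from, you can inspect the affine stored in the NIfTI header itself, e.g. with nibabel (a third-party Python package, shown here only for illustration):

import nibabel as nib

nii = nib.load("some_t1w_mri.nii.gz")
# 4x4 voxel-to-world (scanner) affine: rotation/scaling in the
# upper-left 3x3 block, translation in the last column
print(nii.affine)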
              
lucky_lin (#17)

@bryn I set the rotation and translation of the T1w image and the grid to 0, but I cannot generate the landmarks:

Modeler : [Error] Exception during import: Expecting 'Version 1.0' on first line
Modeler : [Error] operation unsuccessful.

bryn (ZMT) (#18)

                  I don't understand what you are doing. The error looks like Sim4Life cannot parse the .pts file produced by the landmark predictor.

                  • did you edit the .pts file manually?
                  • where/how did you set the rotation/translation to "0"?
                  • did you try to set the transform [to zero] (before running the prediction) as suggested in my last post?
lucky_lin (#19)

After importing the T1w image in the GUI, it is not aligned with the grid, so I adjusted its rotation and translation in the controller.
I did not modify the .pts file; I only made the changes above and then tried to generate the landmarks.
[screenshot]

bryn (ZMT) (#20)

                      But previously you managed to generate them, i.e. before you edited the transform?

lucky_lin (#21)

                        No, I only did these things after creating a new file.

bryn (ZMT) (#22)

I can reproduce your issue. If I set the transform to "0", i.e. to the identity, the predictor fails. The head40 segmentation is also less accurate! We need to investigate.

                          A workaround would be:

                          • load image
                          • predict landmarks & segmentation
                          • compute inverse image transform
                          • apply this inverse to landmarks/segmentation/surfaces/etc
# assumes verts and labelfield are already predicted (without setting the transform to "0")
inv_tx = img.Transform.Inverse()

# transform segmentation
labelfield.ApplyTransform(inv_tx)

# transform landmarks
for v in verts:
    v.ApplyTransform(inv_tx)
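
To illustrate what the inverse transform does, here is a small self-contained numpy example (independent of the Sim4Life API): applying the inverse of a 4x4 affine maps a point placed in world coordinates back to the image's local frame.

import numpy as np

# hypothetical image transform: 90 degree rotation about z plus a translation
tx = np.array([
    [0.0, -1.0, 0.0, 10.0],
    [1.0,  0.0, 0.0, -5.0],
    [0.0,  0.0, 1.0, 20.0],
    [0.0,  0.0, 0.0,  1.0],
])
inv_tx = np.linalg.inv(tx)

p_world = tx @ np.array([1.0, 2.0, 3.0, 1.0])  # a point after the transform
p_local = inv_tx @ p_world                     # the inverse brings it back
print(p_local[:3])                             # [1. 2. 3.]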
                          
bryn (ZMT) (#23)

The issue is that our neural network was trained with the data in the RAS orientation (with some deviation, ±15 degrees, and flipping in all axes). If you manually edit the transform, you break the assumptions used to pre-orient the data into RAS ordering.

                            Since RAS is a widely used convention in neuroscience, and medical images are always acquired with a direction (rotation) matrix and offset (translation), I think it is best you don't modify the transform.
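
As an aside, if you want to check the anatomical orientation implied by a NIfTI affine, nibabel (third-party, used here only for illustration) can report the axis codes:

import nibabel as nib

nii = nib.load("some_t1w_mri.nii.gz")
print(nib.aff2axcodes(nii.affine))  # e.g. ('R', 'A', 'S') for RAS-oriented data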

For instance, if you try to assign DTI-based conductivity maps, you will need to rotate the grid AND the tensors accordingly. It can be done, but it will be more effort...

If this is to investigate whether the fields are (nearly) symmetric, I suggest you

• find an approximate symmetry plane (with respect to the brain, the skull, ...)
• align the plane of a slice viewer perpendicular to the symmetry plane (a quick numerical alternative is sketched below)
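
If you only need a rough quantitative check, you can also compare the label field with its mirror image numerically. A numpy sketch (assuming the data has been exported as an array and the left-right axis is the first one, which holds only after RAS alignment; the array below is placeholder data):

import numpy as np

def dice(a, b):
    """Dice overlap of two boolean masks (1.0 = identical)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

labels = np.random.randint(0, 3, size=(64, 64, 64))  # placeholder label field
mirrored = np.flip(labels, axis=0)  # flip across the approximate mid-sagittal plane

for tissue_id in np.unique(labels):
    print(tissue_id, dice(labels == tissue_id, mirrored == tissue_id))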
bryn (ZMT) wrote earlier in the thread:

                              The latest release 8.2 includes a new function to predict the landmarks needed to place the 10-10-system on the skin: Predict1010SystemLandmarks
The landmarks are the nasion, inion, and left/right pre-auricular points. Sim4Life can now predict these directly from a T1w MRI.

                              The following script demonstrates the whole process:

                              from ImageML import Predict1010SystemLandmarks
                              from s4l_v1.model import Vec3, Import, Create1010System, PlaceElectrodes, CreateSolidCylinder
                              from s4l_v1.model.image import HeadModelGeneration, ExtractSurface
                              
                              img = Import(r"D:\datasets\IXI-T1\IXI021-Guys-0703-T1.nii.gz")[0]
                              
# segment the head; skip adding the dura
                              labelfield = HeadModelGeneration([img], output_spacing=0.6, add_dura=False)
                              
                              # extract surfaces from segmentation
                              surfaces = ExtractSurface(labelfield)
                              surfaces_dict = {e.Name: e for e in surfaces}
                              skin = surfaces_dict["Skin"]
                              
                              # predict landmarks, the function returns a list of Vertex entities
                              verts = Predict1010SystemLandmarks(img)
                              pts = {e.Name: e.Position for e in verts}
                              eeg1010_group = Create1010System(skin, Nz=pts["Nz"], Iz=pts["Iz"], RPA=pts["RPA"], LPA=pts["LPA"])
                              eeg1010_dict = {e.Name: e for e in eeg1010_group.Entities}
                              
                              # create template electrode and place it at C3 position
                              electrode_template = CreateSolidCylinder(Vec3(0), Vec3(0,0,5), radius=10)
                              electrodes = PlaceElectrodes([electrode_template], [eeg1010_dict["C3"]])
                              

                              For the image used in this example, the result looks like this:

[screenshot of the placed 10-10 electrode system]
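
As a quick sanity check after running the script above, the predicted landmark names and positions can be printed (this uses only the Name and Position attributes already shown in the script):

# verts comes from Predict1010SystemLandmarks(img) above
for v in verts:
    print(v.Name, v.Position)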

lucky_lin (#24)

                              @bryn Does this code only segment 40 types of tissues by default? I want to segment 16 types.

bryn (ZMT) (#25)

The default is 40 tissues. To be explicit, you can specify this via

                                import ImageML
                                
                                labelfield = ImageML.HeadModelGeneration([img], output_spacing=0.6, add_dura=False, version=ImageML.eHeadModel.head40)
                                

For 30 (or 16) tissues, you would specify the version head30 (or head16):

                                import ImageML
                                
                                labelfield = ImageML.HeadModelGeneration([img], output_spacing=0.6, add_dura=False, version=ImageML.eHeadModel.head30)
                                

But please note: the versions are an evolution. The head16 segmentation is not simply the same segmentation with fewer tissues; it is also less accurate, as it was the first version we published (and it was trained on less training data).

lucky_lin (#26)

@bryn Thank you very much for your response! I have a question: what is the difference between constructing a head model using both T1-weighted (T1w) and T2-weighted (T2w) images and constructing one using only T1w images? And why can only 16 tissue types be segmented when using T1w and T2w images?

bryn (ZMT) (#27)

@lucky_lin For the first version of the head segmentation (head16) we trained with a smaller dataset where both T1w and T2w were available. We trained two networks: one with just T1w as input, and one that takes T1w + T2w as input.

In our later work we extended the training data, but we only have T1w images there. Therefore, head30 and head40 only need a T1w image.

lucky_lin (#28)

                                      @bryn Okay, I understand ^^

lucky_lin (#29)

@bryn Hello, if I use a script, can I clone an already set-up simulation and then make partial modifications to its settings?

lucky_lin (#30)

@bryn Is the 6.8 displayed on the color bar the actual maximum field strength value? I exported the values and found that the maximum is around 5.9 instead.
[screenshot of the color bar]
