Parent-Child app: Total intensity + grouping / Transfection efficiency
Hi forum,
I was hoping you could help with a question I have:
Generally speaking, I am trying to define bright ROIs in my parent image. Then, I'd like to analyse the total intensities of these areas in the children. Ideally, I'd like to define thresholds for the total intensities, i.e. if a child is above a certain threshold it is classed as group 'x'.
More specifically, I'm trying to use the Parent-Child app to analyse transfection efficiencies. I use a nuclear stain as the parent and grow the masks in the segmentation options tab to cover most of each cell's area. So far, I have only found a way to read out the average intensities in the child channel. Is there an option to get total values for the area? Also, is it possible to define thresholds that can be used to further group the values from the children? For example, if 100 cells were detected in the parent channel, could I automatically see how many of them had intensity values greater than X in the child channel?
Please let me know if there are easier ways to analyse transfection efficiencies.
Hope this makes sense and thank you
Henri
Best Answer
Hi Henri,
You can split the objects by the Red seeds if you use restricted Grow (the attached PCP uses Grow=100); the Red objects will grow within the parent outlines without overlapping.
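(For anyone reproducing this outside the app: this kind of restricted, seeded growth behaves much like a marker-controlled watershed. Below is a minimal scikit-image sketch with illustrative names only - it is not the PCP itself - assuming the seed channel and the channel to be split are 2-D numpy arrays.)

```python
# Illustrative sketch only (not the PCP): restricted, seeded growth as a
# marker-controlled watershed. 'seed_img' and 'region_img' are assumed to be
# 2-D grayscale numpy arrays, e.g. the DAPI channel providing the seeds and
# the red channel providing the area to be split.
from scipy import ndimage as ndi
from skimage import filters, measure, segmentation

def split_region_by_seeds(seed_img, region_img):
    seed_mask = seed_img > filters.threshold_otsu(seed_img)      # seed objects
    region_mask = region_img > filters.threshold_otsu(region_img)  # area to split

    seeds = measure.label(seed_mask)  # one integer marker per seed object

    # Each seed claims the region pixels closest to it, so seeds grow until
    # they meet their neighbours but never overlap (a Voronoi-like split).
    distance = ndi.distance_transform_edt(region_mask)
    return segmentation.watershed(-distance, markers=seeds, mask=region_mask)
```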
The result looks like this:
Yuri
Answers
I think all of this is possible; you just need to configure the measurement options and measurement types for your needs. Can you please send a sample image and some description (drawings) of what you want to measure, and I will try to show how it can be done?
Yuri
I've attached a screenshot of what I've got so far:
I'm detecting the cell nuclei (pink) and setting them as the parent. I grew the masks by 12 to be able to measure the surrounding signal in my red channel. I've thresholded the red (child) channel to detect the whole image - I'm not sure whether setting a threshold to disregard some of the background would be better.
Using the mask from my parent, I am getting 16 counts (shown in the yellow rectangle). For example, count P1R15 has a mean intensity of 84.04 in my red channel.
The issue I'm having now is that I can't find a measurement type to analyse the total intensity in the red channel for each of my 16 counts. I would also like to be able to classify the 16 counts based on their intensity in the red channel: for example, if the value is higher than 84, it should be classified as 'high'; if lower than 84, as 'low'.
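To make it concrete, in rough Python terms what I'm after is something like the sketch below (illustrative only, with made-up variable names and the 84 cutoff from above as a placeholder - I'd just like the equivalent inside the app):

```python
# Illustrative sketch only: 'parent_labels' is a labelled mask (one integer label
# per grown nuclear ROI / count), 'red' is the red-channel image as a numpy array.
from skimage import measure

def classify_counts(parent_labels, red, threshold=84.0):
    """Return the labels whose total red intensity is above/below a cutoff."""
    totals = {}
    for p in measure.regionprops(parent_labels, intensity_image=red):
        # Total (integrated) red intensity inside this ROI;
        # equivalently p.mean_intensity * p.area.
        totals[p.label] = p.intensity_image.sum()

    high = [label for label, t in totals.items() if t > threshold]
    low = [label for label, t in totals.items() if t <= threshold]
    return high, low
```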
I've attached a composite tif file for you to test. It contains the nuclear stain (parent) channel, and 2 potential child channels. There is some bleed-through in the child channels, which shouldn't matter too much for the purpose of this proof of principle.
Thanks again for your help,
Henri
Thanks for the image. I checked it and it's a 3-frame sequence, but in your screenshot I can see different images (DAPI, GFP, TX Red). Can you please send me those, and also the PCP file (saved from the Parent-Child app), which contains all the options you used, so I can start from the point where you already are?
Thanks,
Yuri
Please see attached: 2 images - 1 parent (DAPI) and 1 child (red).
When I try uploading the .pcp file, your website says 'File format is not allowed', so I've changed the file extension to .txt - hopefully you can use it by renaming it back to .pcp.
Thanks,
Henri
I used your images and created a new experiment that measures "Integrated OD Red" (Integrated Optical Density Red); the measurement types for the final result are added to the options of the last child. I've also added "Intensity Red (mean)", just in case.
The values are shown per object, per parent, or as a total (statistics pane, Sum).
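(If you also export the per-object table, the same roll-up can be reproduced outside the app, for example with pandas; the file and column names below are just assumptions - adjust them to your export.)

```python
# Sketch of the per-parent and total Sum on an exported per-object table.
# "parent_child_export.csv", "Parent" and "IOD Red" are assumed names.
import pandas as pd

table = pd.read_csv("parent_child_export.csv")
per_parent = table.groupby("Parent")["IOD Red"].sum()  # Sum per parent
grand_total = table["IOD Red"].sum()                   # Sum over all objects

print(per_parent)
print("Total IOD Red:", grand_total)
```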
Please check the attached PCP (in a zip) and let me know if that's what you need.
Yuri
Thank you so much for your input. This has already been very helpful in learning how to use the software better.
Yuri, the Integrated OD looks like the measurement I was looking for. However, the Parent-Child app might not be the best way to analyse my images.
I think the features manager is great for what I am trying to achieve. And yes, ideally I would like to use the boundaries from the red channel. However, every round object in DAPI equals one cell and needs to be counted separately. Is there a way to take the boundary masks from the red channel and split them based on the DAPI masks (in a Voronoi sort of way)? For example, in your image, Matt, P1R2 contains 3 DAPI objects and should be split into 3.
This is not crucial if it is too complicated.
The more important part for me would be to group measurements according to their integrated OD. I've attached an image with an attempt to threshold my ROIs, which leads to 2 different classes. However, I'm not sure how they are classed: most cells have some red and green areas, but are ultimately classed into either red or green.
Thanks again for all your efforts!
I've done a 180 and gone back to using Parent-Child. I've combined the segmentation from both images, and now I've used learning classification to create a 'Negative' and a 'Positive' class. The attached image is an easier one to analyse, as the 2 channels occupy the same regions, but it works very well.
It would still be great to be able to change the masks in cases like the image above, when a bigger area (red) should be split into multiple objects based on a second channel (DAPI). Maybe you know of a way to do this?
Otherwise, this seems to be working pretty well.
Thanks a lot!