<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3.dtd">
<article article-type="research-article" dtd-version="1.3" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xml:lang="ru"><front><journal-meta><journal-id journal-id-type="publisher-id">regmedjournal</journal-id><journal-title-group><journal-title xml:lang="ru">Регенерация органов и тканей</journal-title><trans-title-group xml:lang="en"><trans-title>Regeneration of Organs and Tissues</trans-title></trans-title-group></journal-title-group><issn pub-type="epub">2949-5938</issn><publisher><publisher-name>Общество регенеративной медицины</publisher-name></publisher></journal-meta><article-meta><article-id pub-id-type="doi">10.60043/2949-5938-2025-1-31-40</article-id><article-id custom-type="elpub" pub-id-type="custom">regmedjournal-102</article-id><article-categories><subj-group subj-group-type="heading"><subject>Research Article</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="ru"><subject>ОРИГИНАЛЬНЫЕ СТАТЬИ</subject></subj-group><subj-group subj-group-type="section-heading" xml:lang="en"><subject>ORIGINAL ARTICLES</subject></subj-group></article-categories><title-group><article-title>Полуавтоматический метод аннотирования фазово-контрастных изображений живых культур клеток для сегментации ядер на основе машинного обучения</article-title><trans-title-group xml:lang="en"><trans-title>Semi-automatic method of annotating phase-contrast images of live cell cultures for nuclei segmentation based on machine learning</trans-title></trans-title-group></title-group><contrib-group><contrib contrib-type="author" corresp="yes"><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Балясин</surname><given-names>М. В.</given-names></name><name name-style="western" xml:lang="en"><surname>Balyasin</surname><given-names>M. 
V.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Балясин Максим Витальевич — младший научный сотрудник Научно-образовательного ресурсного центра «Клеточные технологии» ФГАОУ ВО «РУДН им. Патриса Лумумбы»; научный сотрудник Лаборатории мутагенеза ФГБНУ «МГНЦ им. акад. Н.П. Бочкова».</p><p>117198, Москва, ул. Миклухо-Маклая, д. 6; 115522, Москва, ул. Москворечье, д. 1</p></bio><bio xml:lang="en"><p>Maxim V. Balyasin — Junior Researcher, Scientific and Educational Resource Center “Cell Technologies”, RUDN University; Researcher, Laboratory of Mutagenesis, Research Centre for Medical Genetics.</p><p>117198, Moscow, Miklukho-Maklaya str., 6; 115522, Moscow, Moskvorechye str., 1</p></bio><email xlink:type="simple">balyasin_mv@rudn.ru</email><xref ref-type="aff" rid="aff-1"/></contrib><contrib contrib-type="author" corresp="yes"><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Демченко</surname><given-names>А. Г.</given-names></name><name name-style="western" xml:lang="en"><surname>Demchenko</surname><given-names>A. G.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Демченко Анна Григорьевна — к.б.н., старший научный сотрудник Лаборатории редактирования генома.</p><p>115522, Москва, ул. Москворечье, д. 1</p></bio><bio xml:lang="en"><p>Anna G. Demchenko — Cand. Sci. (Biology), Senior Researcher, Laboratory of Genome Editing, Research Centre for Medical Genetics.</p><p>115522, Moscow, Moskvorechye str., 1</p></bio><xref ref-type="aff" rid="aff-2"/></contrib><contrib contrib-type="author" corresp="yes"><name-alternatives><name name-style="eastern" xml:lang="ru"><surname>Люндуп</surname><given-names>А. В.</given-names></name><name name-style="western" xml:lang="en"><surname>Lyundup</surname><given-names>A. 
V.</given-names></name></name-alternatives><bio xml:lang="ru"><p>Люндуп Алексей Валерьевич — к.м.н., директор Научно-образовательного ресурсного центра «Клеточные технологии» ФГАОУ ВО «РУДН им. Патриса Лумумбы»; ведущий научный сотрудник Лаборатории мутагенеза ФГБНУ «МГНЦ им. акад. Н.П. Бочкова».</p><p>117198, Москва, ул. Миклухо-Маклая, д. 6; 115522, Москва, ул. Москворечье, д. 1</p></bio><bio xml:lang="en"><p>Alexey V. Lyundup — Cand. Sci. (Medicine), Director, Scientific and Educational Resource Center “Cell Technologies”, RUDN University; Leading Researcher, Laboratory of Mutagenesis, Research Centre for Medical Genetics.</p><p>117198, Moscow, Miklukho-Maklaya str., 6; 115522, Moscow, Moskvorechye str., 1</p></bio><xref ref-type="aff" rid="aff-1"/></contrib></contrib-group><aff-alternatives id="aff-1"><aff xml:lang="ru">ФГАОУ ВО «Российский университет дружбы народов имени Патриса Лумумбы»; ФГБНУ «Медико-генетический научный центр имени академика Н.П. Бочкова»<country>Россия</country></aff><aff xml:lang="en">Peoples’ Friendship University of Russia named after Patrice Lumumba (RUDN University); Research Centre for Medical Genetics<country>Russian Federation</country></aff></aff-alternatives><aff-alternatives id="aff-2"><aff xml:lang="ru">ФГБНУ «Медико-генетический научный центр имени академика Н.П. 
Бочкова»<country>Россия</country></aff><aff xml:lang="en">Research Centre for Medical Genetics<country>Russian Federation</country></aff></aff-alternatives><pub-date pub-type="collection"><year>2025</year></pub-date><pub-date pub-type="epub"><day>05</day><month>04</month><year>2026</year></pub-date><volume>3</volume><issue>1</issue><fpage>31</fpage><lpage>40</lpage><permissions><copyright-statement>Copyright &#x00A9; Балясин М.В., Демченко А.Г., Люндуп А.В., 2026</copyright-statement><copyright-year>2026</copyright-year><copyright-holder xml:lang="ru">Балясин М.В., Демченко А.Г., Люндуп А.В.</copyright-holder><copyright-holder xml:lang="en">Balyasin M.V., Demchenko A.G., Lyundup A.V.</copyright-holder><license license-type="creative-commons-attribution" xlink:href="https://creativecommons.org/licenses/by/4.0/" xlink:type="simple"><license-p>This work is licensed under a Creative Commons Attribution 4.0 License.</license-p></license></permissions><self-uri xlink:href="https://www.regmed-journal.ru/jour/article/view/102">https://www.regmed-journal.ru/jour/article/view/102</self-uri><abstract><p>В текущей работе разработан комплексный подход для определения ядер живых клеток на изображениях без флуоресцентных меток. Поскольку в клеточной биологии актуальными являются подсчет клеток, оценка динамики роста клеток и конфлюентности, то существует целесообразность в автоматизации получения этих данных. Для автоматизации применяют алгоритмы на основе машинного обучения, которые необходимо обучать на изображениях конкретных культур клеток. Обучение алгоритмов является трудоемким процессом и требует длительной ручной разметки. Также доступные методы анализа на основе машинного обучения обладают низкой точностью определения живых клеток без флуоресцентной окраски. Цель исследования: упростить создание набора данных аннотированных клеток с последующим обучением алгоритмов на изображениях живых культур клеток. Материалы и методы. 
Методика включала использование сверточных нейронных сетей на основе алгоритма StarDist, предназначенного для сегментации ядер клеток на флуоресцентных и гистологических изображениях. Для создания аннотированных фазово-контрастных изображений культуры клеток образцы окрашивали ядерным флуоресцентным красителем DAPI с последующей отбраковкой некачественных изображений при помощи классификации в программе CellProfiler Analyst. Обучение модели на основе StarDist проводили на 1130 изображениях автоматически аннотированных ядер на фазово-контрастных изображениях культуры эпителиальных клеток респираторного тракта человека, полученных на объективе 10×, размером 1600×1200 пикселей и глубиной цвета 16 бит. Результаты исследования. Полученная модель показала хорошую точность (F1 = 0,765) сегментации ядер на валидационном наборе данных. Модель применили для определения времени удвоения популяции культуры эпителиальных клеток. Заключение: разработанный подход позволил создать аннотации и обучить модель машинного обучения для получения данных без применения флуоресцентных меток «label-free» на живых культурах клеток.</p></abstract><trans-abstract xml:lang="en"><p>This study developed a comprehensive approach for identifying live cell nuclei in images without fluorescent labels. Since cell biology involves counting cells and assessing cell growth dynamics and confluence, it is expedient to automate the collection of this data. Automation relies on machine learning algorithms, which must be trained on images of specific cell cultures. Training algorithms is a labor-intensive process and requires lengthy manual annotation. Also, available machine learning-based analysis methods have low accuracy in identifying living cells without fluorescent staining. Aim of the study. To simplify the creation of a dataset of annotated cells with subsequent training of algorithms on images of living cell cultures. Materials and methods. 
The methodology involved the use of convolutional neural networks based on StarDist, an algorithm for segmenting cell nuclei in fluorescent and histological images. To create annotated phase-contrast images of cell cultures, samples were stained with the nuclear fluorescent dye DAPI, followed by the rejection of poor-quality images using classification in the CellProfiler Analyst program. The StarDist-based model was trained on 1,130 images of automatically annotated nuclei in phase-contrast images of human respiratory tract epithelial cell cultures, obtained with a 10× objective, 1,600×1,200 pixels in size, and 16-bit color depth. Results. The resulting model showed good accuracy (F1 = 0.765) in segmenting nuclei on the validation dataset. The model was used to determine the population doubling time of the epithelial cell culture. Conclusion. The developed approach made it possible to create annotations and train a machine learning model to obtain data without the use of fluorescent labels (“label-free”) on live cell cultures.</p></trans-abstract><kwd-group xml:lang="ru"><kwd>машинное обучение</kwd><kwd>сверточные нейронные сети</kwd><kwd>сегментация ядер</kwd><kwd>фазово-контрастные изображения</kwd><kwd>label-free</kwd></kwd-group><kwd-group xml:lang="en"><kwd>machine learning</kwd><kwd>convolutional neural networks</kwd><kwd>nuclear segmentation</kwd><kwd>phase-contrast images</kwd><kwd>label-free</kwd></kwd-group><funding-group xml:lang="ru"><funding-statement>Работа выполнена в рамках государственного задания Министерства науки и высшего образования Российской Федерации для федерального государственного бюджетного научного учреждения «Медико-генетический научный центр имени академика Н.П. 
Бочкова».</funding-statement></funding-group><funding-group xml:lang="en"><funding-statement>This work was carried out within the framework of the state assignment of the Ministry of Science and Higher Education of the Russian Federation for the Research Centre for Medical Genetics named after Academician N.P. Bochkov.</funding-statement></funding-group></article-meta></front><back><ref-list><title>References</title><ref id="cit1"><label>1</label><citation-alternatives><mixed-citation xml:lang="ru">Ayanzadeh A., Yağar H.O., Özuysal Ö.Y., Okvur D.P., Töreyin B.U., Ünay D., Önal S. Cell segmentation of 2D phase-contrast microscopy images with deep learning method // 2019 Medical Technologies Congress (TIPTEKNO). IEEE, 2019. P. 1–4.</mixed-citation><mixed-citation xml:lang="en">Ayanzadeh A., Yağar H.O., Özuysal Ö.Y., Okvur D.P., Töreyin B.U., Ünay D., Önal S. Cell segmentation of 2D phase-contrast microscopy images with deep learning method // 2019 Medical Technologies Congress (TIPTEKNO). IEEE, 2019. P. 1–4.</mixed-citation></citation-alternatives></ref><ref id="cit2"><label>2</label><citation-alternatives><mixed-citation xml:lang="ru">Kirillov A., Mintun E., Ravi N., Mao H., Rolland C., Gustafson L., et al. Segment anything // Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. P. 4015–4026.</mixed-citation><mixed-citation xml:lang="en">Kirillov A., Mintun E., Ravi N., Mao H., Rolland C., Gustafson L., et al. Segment anything // Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023. P. 4015–4026.</mixed-citation></citation-alternatives></ref><ref id="cit3"><label>3</label><citation-alternatives><mixed-citation xml:lang="ru">Kakumani A.K., Padma Sree L. A Deep Learning Approach for Segmenting Time-Lapse Phase Contrast Images of NIH 3T3 Fibroblast Cells // New Trends in Computational Vision and Bio-inspired Computing: Selected works presented at the ICCVBIC 2018, Coimbatore, India. 
Cham: Springer International Publishing, 2020. P. 855–862.</mixed-citation><mixed-citation xml:lang="en">Kakumani A.K., Padma Sree L. A Deep Learning Approach for Segmenting Time-Lapse Phase Contrast Images of NIH 3T3 Fibroblast Cells // New Trends in Computational Vision and Bio-inspired Computing: Selected works presented at the ICCVBIC 2018, Coimbatore, India. Cham: Springer International Publishing, 2020. P. 855–862.</mixed-citation></citation-alternatives></ref><ref id="cit4"><label>4</label><citation-alternatives><mixed-citation xml:lang="ru">Shamshad F., Khan S., Zamir S.W., Khan M.H., Hayat M., Khan F.S., Fu H. Transformers in medical imaging: A survey // Medical Image Analysis. 2023. Vol. 88. P. 102802.</mixed-citation><mixed-citation xml:lang="en">Shamshad F., Khan S., Zamir S.W., Khan M.H., Hayat M., Khan F.S., Fu H. Transformers in medical imaging: A survey // Medical Image Analysis. 2023. Vol. 88. P. 102802.</mixed-citation></citation-alternatives></ref><ref id="cit5"><label>5</label><citation-alternatives><mixed-citation xml:lang="ru">Greenwald N.F., Miller G., Moen E., Kong A., Kagel A., Dougherty T., et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning // Nature Biotechnology. 2022. Vol. 40, № 4. P. 555–565.</mixed-citation><mixed-citation xml:lang="en">Greenwald N.F., Miller G., Moen E., Kong A., Kagel A., Dougherty T., et al. Whole-cell segmentation of tissue images with human-level performance using large-scale data annotation and deep learning // Nature Biotechnology. 2022. Vol. 40, № 4. P. 555–565.</mixed-citation></citation-alternatives></ref><ref id="cit6"><label>6</label><citation-alternatives><mixed-citation xml:lang="ru">Stringer C., Wang T., Michaelos M., Pachitariu M. Cellpose: a generalist algorithm for cellular segmentation // Nature Methods. 2021. Vol. 18, № 1. P. 
100–106.</mixed-citation><mixed-citation xml:lang="en">Stringer C., Wang T., Michaelos M., Pachitariu M. Cellpose: a generalist algorithm for cellular segmentation // Nature Methods. 2021. Vol. 18, № 1. P. 100–106.</mixed-citation></citation-alternatives></ref><ref id="cit7"><label>7</label><citation-alternatives><mixed-citation xml:lang="ru">Lee M.Y., Bedia J.S., Bhate S.S., Barlow G.L., Phillips D., Fantl W.J., et al. CellSeg: a robust, pre-trained nucleus segmentation and pixel quantification software for highly multiplexed fluorescence images // BMC Bioinformatics. 2022. Vol. 23, № 1. P. 46.</mixed-citation><mixed-citation xml:lang="en">Lee M.Y., Bedia J.S., Bhate S.S., Barlow G.L., Phillips D., Fantl W.J., et al. CellSeg: a robust, pre-trained nucleus segmentation and pixel quantification software for highly multiplexed fluorescence images // BMC Bioinformatics. 2022. Vol. 23, № 1. P. 46.</mixed-citation></citation-alternatives></ref><ref id="cit8"><label>8</label><citation-alternatives><mixed-citation xml:lang="ru">Holme B., Bjørnerud B., Pedersen N.M., de la Ballina L.R., Wesche J., Haugsten E.M. Automated tracking of cell migration in phase contrast images with CellTraxx // Scientific Reports. 2023. Vol. 13, № 1. P. 22982.</mixed-citation><mixed-citation xml:lang="en">Holme B., Bjørnerud B., Pedersen N.M., de la Ballina L.R., Wesche J., Haugsten E.M. Automated tracking of cell migration in phase contrast images with CellTraxx // Scientific Reports. 2023. Vol. 13, № 1. P. 22982.</mixed-citation></citation-alternatives></ref><ref id="cit9"><label>9</label><citation-alternatives><mixed-citation xml:lang="ru">Hörst F., Rempe M., Heine L., Seibold C., Keyl J., Baldini G., et al. CellViT: Vision transformers for precise cell segmentation and classification // Medical Image Analysis. 2024. Vol. 94. P. 103143.</mixed-citation><mixed-citation xml:lang="en">Hörst F., Rempe M., Heine L., Seibold C., Keyl J., Baldini G., et al. 
CellViT: Vision transformers for precise cell segmentation and classification // Medical Image Analysis. 2024. Vol. 94. P. 103143.</mixed-citation></citation-alternatives></ref><ref id="cit10"><label>10</label><citation-alternatives><mixed-citation xml:lang="ru">Berg S., Kutra D., Kroeger T., Straehle C.N., Kausler B.X., Haubold C., et al. ilastik: interactive machine learning for (bio)image analysis // Nature Methods. 2019. Vol. 16, № 12. P. 1226–1232.</mixed-citation><mixed-citation xml:lang="en">Berg S., Kutra D., Kroeger T., Straehle C.N., Kausler B.X., Haubold C., et al. ilastik: interactive machine learning for (bio)image analysis // Nature Methods. 2019. Vol. 16, № 12. P. 1226–1232.</mixed-citation></citation-alternatives></ref><ref id="cit11"><label>11</label><citation-alternatives><mixed-citation xml:lang="ru">Ali R., Gooding M., Szilágyi T., Vojnovic B., Christlieb M., Brady M. Automatic segmentation of adherent biological cell boundaries and nuclei from brightfield microscopy images // Machine Vision and Applications. 2012. Vol. 23, № 4. P. 607–621.</mixed-citation><mixed-citation xml:lang="en">Ali R., Gooding M., Szilágyi T., Vojnovic B., Christlieb M., Brady M. Automatic segmentation of adherent biological cell boundaries and nuclei from brightfield microscopy images // Machine Vision and Applications. 2012. Vol. 23, № 4. P. 607–621.</mixed-citation></citation-alternatives></ref><ref id="cit12"><label>12</label><citation-alternatives><mixed-citation xml:lang="ru">Cross-Zamirski J.O., Mouchet E., Williams G., Schönlieb C.B., Turkki R., Wang Y. Label-free prediction of cell painting from brightfield images // Scientific Reports. 2022. Vol. 12, № 1. P. 10001.</mixed-citation><mixed-citation xml:lang="en">Cross-Zamirski J.O., Mouchet E., Williams G., Schönlieb C.B., Turkki R., Wang Y. Label-free prediction of cell painting from brightfield images // Scientific Reports. 2022. Vol. 12, № 1. P. 
10001.</mixed-citation></citation-alternatives></ref><ref id="cit13"><label>13</label><citation-alternatives><mixed-citation xml:lang="ru">Pachitariu M., Stringer C. Cellpose 2.0: how to train your own model // Nature Methods. 2022. Vol. 19, № 12. P. 1634–1641.</mixed-citation><mixed-citation xml:lang="en">Pachitariu M., Stringer C. Cellpose 2.0: how to train your own model // Nature Methods. 2022. Vol. 19, № 12. P. 1634–1641.</mixed-citation></citation-alternatives></ref><ref id="cit14"><label>14</label><citation-alternatives><mixed-citation xml:lang="ru">Demchenko A., Belova L., Balyasin M., Kochergin-Nikitsky K., Kondrateva E., Voronina E., et al. Airway basal cells from human-induced pluripotent stem cells: a new frontier in cystic fibrosis research // Frontiers in Cell and Developmental Biology. 2024. Vol. 12. P. 1336392.</mixed-citation><mixed-citation xml:lang="en">Demchenko A., Belova L., Balyasin M., Kochergin-Nikitsky K., Kondrateva E., Voronina E., et al. Airway basal cells from human-induced pluripotent stem cells: a new frontier in cystic fibrosis research // Frontiers in Cell and Developmental Biology. 2024. Vol. 12. P. 1336392.</mixed-citation></citation-alternatives></ref><ref id="cit15"><label>15</label><citation-alternatives><mixed-citation xml:lang="ru">Stirling D.R., Swain-Bowden M.J., Lucas A.M., Carpenter A.E., Cimini B.A., Goodman A. CellProfiler 4: improvements in speed, utility and usability // BMC Bioinformatics. 2021. Vol. 22, № 1. P. 433.</mixed-citation><mixed-citation xml:lang="en">Stirling D.R., Swain-Bowden M.J., Lucas A.M., Carpenter A.E., Cimini B.A., Goodman A. CellProfiler 4: improvements in speed, utility and usability // BMC Bioinformatics. 2021. Vol. 22, № 1. P. 433.</mixed-citation></citation-alternatives></ref><ref id="cit16"><label>16</label><citation-alternatives><mixed-citation xml:lang="ru">Stirling D.R., Carpenter A.E., Cimini B.A. 
CellProfiler Analyst 3.0: accessible data exploration and machine learning for image analysis // Bioinformatics. 2021. Vol. 37, № 21. P. 3992–3994.</mixed-citation><mixed-citation xml:lang="en">Stirling D.R., Carpenter A.E., Cimini B.A. CellProfiler Analyst 3.0: accessible data exploration and machine learning for image analysis // Bioinformatics. 2021. Vol. 37, № 21. P. 3992–3994.</mixed-citation></citation-alternatives></ref><ref id="cit17"><label>17</label><citation-alternatives><mixed-citation xml:lang="ru">Schmidt U., Weigert M., Broaddus C., Myers G. Cell detection with star-convex polygons // International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2018. P. 265–273.</mixed-citation><mixed-citation xml:lang="en">Schmidt U., Weigert M., Broaddus C., Myers G. Cell detection with star-convex polygons // International Conference on Medical Image Computing and Computer-Assisted Intervention. Cham: Springer International Publishing, 2018. P. 265–273.</mixed-citation></citation-alternatives></ref><ref id="cit18"><label>18</label><citation-alternatives><mixed-citation xml:lang="ru">Tsai H.F., Gajda J., Sloan T.F., Rares A., Shen A.Q. Usiigaci: Instance-aware cell tracking in stain-free phase contrast microscopy enabled by machine learning // SoftwareX. 2019. Vol. 9. P. 230–237.</mixed-citation><mixed-citation xml:lang="en">Tsai H.F., Gajda J., Sloan T.F., Rares A., Shen A.Q. Usiigaci: Instance-aware cell tracking in stain-free phase contrast microscopy enabled by machine learning // SoftwareX. 2019. Vol. 9. P. 230–237.</mixed-citation></citation-alternatives></ref><ref id="cit19"><label>19</label><citation-alternatives><mixed-citation xml:lang="ru">Fazeli E., Roy N.H., Follain G., Laine R.F., von Chamier L., Hänninen P.E., et al. Automated cell tracking using StarDist and TrackMate // F1000Research. 2020. Vol. 9. P. 
1279.</mixed-citation><mixed-citation xml:lang="en">Fazeli E., Roy N.H., Follain G., Laine R.F., von Chamier L., Hänninen P.E., et al. Automated cell tracking using StarDist and TrackMate // F1000Research. 2020. Vol. 9. P. 1279.</mixed-citation></citation-alternatives></ref><ref id="cit20"><label>20</label><citation-alternatives><mixed-citation xml:lang="ru">Moen E., Borba E., Miller G., Schwartz M., Bannon D., Koe N., et al. Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning // bioRxiv. 2019. 803205.</mixed-citation><mixed-citation xml:lang="en">Moen E., Borba E., Miller G., Schwartz M., Bannon D., Koe N., et al. Accurate cell tracking and lineage construction in live-cell imaging experiments with deep learning // bioRxiv. 2019. 803205.</mixed-citation></citation-alternatives></ref></ref-list><fn-group><fn fn-type="conflict"><p>The authors declare that there are no conflicts of interest present.</p></fn></fn-group></back></article>
