Award Date

5-1-2022

Degree Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Electrical and Computer Engineering

First Committee Member

Mei Yang

Second Committee Member

Yingtao Jiang

Third Committee Member

Emma Regentova

Fourth Committee Member

Mo Weng

Number of Pages

118

Abstract

Hyper-dimensional images are informative and are becoming increasingly common in biomedical research. However, the machine learning methods for studying and processing hyper-dimensional images are underdeveloped. Most existing methods model the mapping functions between input and output by focusing only on the spatial relationship, while neglecting the temporal and causal relationships. In many cases, the spatial, temporal, and causal relationships are correlated and together form a relationship complex. Therefore, modeling only the spatial relationship may yield an inaccurate mapping function and lead to undesired output. Despite its importance, modeling the relationship complex poses multiple challenges, including model complexity and data availability. The objective of this dissertation is to comprehensively study the modeling of mapping functions for the spatial-temporal and spatial-temporal-causal relationships in hyper-dimensional data with deep learning approaches. The modeling methods are expected to accurately capture the complex relationships at the class level and the object level, so that new image processing tools can be developed based on these methods to study the relationships between targets in hyper-dimensional data. In this dissertation, four cases of the relationship complex are studied: class-level spatial-temporal-causal and spatial-temporal relationship modeling, and object-level spatial-temporal-causal and spatial-temporal relationship modeling. The modeling is achieved by deep learning networks that implicitly represent the mapping functions with their weight matrices. For the spatial-temporal relationship, because cause factor information is unavailable, discriminative modeling that relies only on the available information is studied. For the class-level and object-level spatial-temporal-causal relationships, generative modeling is studied, with a new deep learning network and three new tools proposed. For spatial-temporal relationship modeling, a state-of-the-art segmentation network is found to be the best performer among 18 networks. Based on accurate segmentation, we study the object-level temporal dynamics and interactions through dynamics tracking. The multi-object portion tracking (MOPT) method allows object tracking at the subcellular level and identifies object events, including object birth, death, splitting, and fusion. The tracking results are 2.96% higher in consistent tracking accuracy and 35.48% higher in event identification accuracy compared with existing state-of-the-art tracking methods. For spatial-temporal-causal relationship modeling, the proposed four-dimensional reslicing generative adversarial network (4DR-GAN) captures the complex relationships between the input and the target proteins. Experimental results on four groups of proteins demonstrate the efficacy of 4DR-GAN compared with the widely used Pix2Pix network. In protein localization prediction (PLP), the localization predicted by 4DR-GAN is more accurate in subcellular localization, temporal consistency, and dynamics. Based on efficient PLP, the digital activation (DA) and digital inactivation (DI) tools allow precise spatial and temporal control of global and local localization manipulation. They allow researchers to study protein functions and causal relationships by observing the digital manipulation and the PLP output response.

Keywords

bioinformatics; biological image processing; deep learning; hyper-dimensional; machine learning; object tracking

Disciplines

Communication | Computer Engineering | Electrical and Computer Engineering

File Format

pdf

File Size

10300 KB

Degree Grantor

University of Nevada, Las Vegas

Language

English

Rights

IN COPYRIGHT. For more information about this rights statement, please visit http://rightsstatements.org/vocab/InC/1.0/

