Parallel and Distributed Computing Laboratory
Dept. of Computer Science & Information Engineering
Wei-Jen Wang (Associate Professor)
The members of the Parallel and Distributed Computing Laboratory (PaDiC Lab) focus on research issues in parallel and distributed computing, in particular distributed programming and cloud/grid computing. We also study related issues in Internet applications, such as information hiding, system security, e-science, and e-learning. Our research interests include, but are not limited to, the following topics:
Distributed Programming Technology
Industrial IoT Systems
Fault Tolerance/High Availability for Virtual Machines
Cloud Computing Environment:
Drawing on our knowledge and experience in parallel/distributed computing technologies, we have developed a cloud platform, named SAMEVED, that provisions on-demand computing resources at different granularities, from a single virtual machine to a virtual elastic datacenter. That is, SAMEVED is able to construct and manage a set of virtual machines on a private, user-defined network topology. SAMEVED currently hosts a software service (CSEP) that provides tens of on-demand cloud/network security experiments, such as DDoS and SQL injection. We also used SAMEVED to build a prototype of a multi-cloud system and investigated possible autonomic computing strategies in such an environment. In this study, we used mobile-agent technology to model cloud applications, and we designed and implemented a system that enables automatic, intelligent migration of cloud applications and self-provisioning of cloud resources. To improve the effectiveness of resource allocation in a large-scale distributed environment, we exploit runtime estimation and several fast scheduling strategies for near-optimal resource allocation, which yields high resource utilization and low execution times in the private cloud.
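As a minimal illustration of runtime-estimation-based scheduling, the Python sketch below is not the actual SAMEVED scheduler; it shows one classic fast strategy of this kind, the longest-processing-time-first (LPT) heuristic, which assigns each job (ordered by estimated runtime) to the least-loaded virtual machine and whose makespan is provably within a factor of 4/3 of optimal.

    import heapq

    def lpt_schedule(estimated_runtimes, num_vms):
        """Greedy longest-processing-time-first (LPT) scheduling.

        Processes jobs in decreasing order of estimated runtime and
        assigns each one to the VM with the smallest accumulated load.
        """
        # Min-heap of (accumulated_load, vm_id) pairs.
        loads = [(0.0, vm) for vm in range(num_vms)]
        heapq.heapify(loads)
        assignment = {}
        for job, runtime in sorted(
                enumerate(estimated_runtimes), key=lambda x: -x[1]):
            load, vm = heapq.heappop(loads)
            assignment[job] = vm
            heapq.heappush(loads, (load + runtime, vm))
        return assignment

    # Example: ten jobs with estimated runtimes, scheduled onto three VMs.
    print(lpt_schedule([8, 3, 5, 7, 2, 9, 4, 6, 1, 5], num_vms=3))

The heap keeps each assignment decision at logarithmic cost, which is what makes heuristics of this family fast enough for large-scale environments.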
Fault Tolerance for Virtualization Technology:
We are working on a mechanism that supports fault tolerance and high availability for KVM on industrial personal computers. Over the past year, we have studied fault-tolerant virtual machine technology extensively and developed a KVM-based high-availability system called NCU HA. We are also working on a project named NCU FTVM, which aims to provide fault tolerance for KVM so that execution continues across failures.
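The skeleton below sketches only the failure-detection and failover step of such a mechanism, using the libvirt Python bindings that manage KVM guests; the guest names are hypothetical, and a full fault-tolerant VM system would additionally replicate memory and disk state so that execution resumes transparently.

    import time
    import libvirt  # Python bindings for libvirt, which manages KVM guests

    PRIMARY, BACKUP = "guest-primary", "guest-backup"   # hypothetical names

    def monitor(poll_interval=1.0):
        """Poll the primary guest and start the backup if it goes down.

        Continuous-checkpointing FT systems also keep the backup's state
        synchronized; this sketch shows only the failover trigger.
        """
        conn = libvirt.open("qemu:///system")
        while True:
            primary = conn.lookupByName(PRIMARY)
            if not primary.isActive():            # primary has failed
                backup = conn.lookupByName(BACKUP)
                if not backup.isActive():
                    backup.create()               # boot the standby guest
                break
            time.sleep(poll_interval)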
Performance Analysis of Industrial Internet of Things Systems:
We are working on a project that aims to evaluate the performance of a DDS-based (Data Distribution Service) IIoT system and to use the measured values to build an emulation/simulation platform.
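To give a flavor of the kind of measurement involved, the sketch below collects one-way message latencies with a timestamping publisher/subscriber pair. It uses a plain UDP socket on the loopback interface as a stand-in transport, since the system under study publishes over DDS; the port number is arbitrary.

    import socket
    import statistics
    import struct
    import threading
    import time

    ADDR = ("127.0.0.1", 47000)   # arbitrary loopback port

    def subscribe(sock, n_samples, latencies):
        """Receive timestamped samples and record their one-way latency."""
        for _ in range(n_samples):
            data, _ = sock.recvfrom(64)
            (sent,) = struct.unpack("d", data)
            latencies.append(time.time() - sent)

    def publish(n_samples, period=0.01):
        """Publish samples that carry their send timestamp as payload."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for _ in range(n_samples):
            sock.sendto(struct.pack("d", time.time()), ADDR)
            time.sleep(period)

    recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    recv_sock.bind(ADDR)          # bind before publishing starts
    latencies = []
    t = threading.Thread(target=subscribe, args=(recv_sock, 100, latencies))
    t.start()
    publish(100)
    t.join()
    print(f"mean one-way latency: {statistics.mean(latencies) * 1e6:.1f} us")

One-way latency is meaningful here only because publisher and subscriber share the same clock; across machines, round-trip measurement or clock synchronization would be needed.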
Large-Scale Scientific Computing Application, System, and Architecture:
Many computational science applications, due to their inherent nature and complexity, are data-intensive or compute-intensive and thus require long execution times. As a result, many parallel/distributed computing techniques have been used to develop domain-specific tools that support scientific research. Over the past five years, we have developed several general-purpose tools for concurrent data analysis and conducted collaborative projects with experts in astronomy and neuroscience.
In the domain of astronomy, we have developed a distributed agglomerative hierarchical clustering algorithm that calculates the similarity of approximately six hundred thousand main-belt asteroids and builds a hierarchical tree accordingly. The experimental results demonstrate its excellent scalability. We also developed a new distributed data structure and algorithm that, given an asteroid and a similarity threshold, finds the corresponding cluster in a short time. We have also built several distributed systems to support large-scale data processing, storage, and querying for different astronomical applications.
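For illustration, in single-linkage clustering the cluster containing a given asteroid at similarity threshold t is exactly the connected component of the graph whose edges are asteroid pairs with similarity at least t. The sketch below computes it serially with a breadth-first search over toy data; our actual system uses a distributed data structure to answer such queries at scale.

    from collections import defaultdict, deque

    def find_cluster(similarity, target, threshold):
        """Return the single-linkage cluster containing `target`.

        `similarity` maps (i, j) pairs to similarity scores. At a given
        threshold, the cluster is the connected component of the graph
        whose edges are pairs scoring at or above that threshold.
        """
        graph = defaultdict(set)
        for (i, j), score in similarity.items():
            if score >= threshold:
                graph[i].add(j)
                graph[j].add(i)
        cluster, frontier = {target}, deque([target])
        while frontier:
            node = frontier.popleft()
            for nbr in graph[node]:
                if nbr not in cluster:
                    cluster.add(nbr)
                    frontier.append(nbr)
        return cluster

    sims = {("a", "b"): 0.9, ("b", "c"): 0.7, ("c", "d"): 0.4}
    print(find_cluster(sims, "a", threshold=0.6))   # {'a', 'b', 'c'}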
Recently, we have initiated a research collaboration with neuroscientists, in which we help investigate a software approach to improving the efficiency of data analysis. We further developed a computing tool that parallelizes this approach on a general-purpose GPU. On real experimental data, the speedup factor is about 7 on average, depending on the model complexity and the data.
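A rough sketch of how such a CPU-to-GPU speedup can be measured is shown below. It uses a representative normalization-and-correlation kernel rather than the actual analysis pipeline, and it assumes CuPy and an NVIDIA GPU are available.

    import time
    import numpy as np
    import cupy as cp   # assumes CuPy and an NVIDIA GPU are available

    def analyze(data):
        """Representative kernel: z-score each trace, then compute the
        pairwise correlation matrix (works on NumPy and CuPy arrays)."""
        data = (data - data.mean(axis=1, keepdims=True)) \
               / data.std(axis=1, keepdims=True)
        return data @ data.T / data.shape[1]

    traces = np.random.rand(2000, 10000).astype(np.float32)

    t0 = time.time()
    analyze(traces)                     # CPU run with NumPy
    cpu_time = time.time() - t0

    gpu_traces = cp.asarray(traces)     # copy the data to the GPU
    t0 = time.time()
    analyze(gpu_traces)                 # GPU run with CuPy
    cp.cuda.Stream.null.synchronize()   # wait for GPU work to finish
    gpu_time = time.time() - t0

    print(f"speedup: {cpu_time / gpu_time:.1f}x")

As in our measurements, the observed speedup depends on the problem size and on how much of the kernel maps to dense, data-parallel operations.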
Further information is available in Wei-Jen Wang's Publication List.