# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2022-TTU-1 | Renewable Energy Powered Data Center Control | Alan Sill |
Availability of large quantities of renewable energy can break the cost curve for large scale computing. For most
high performance computing centers, energy is a significant fraction of operating cost and, over the lifetime of the
equipment, can approach the cost of the computing clusters themselves. Wind power and solar energy are increasingly
available in the US and worldwide, but unlike previous sources of renewable energy such as hydroelectric plants,
each of these has significantly variable availability and cost (sometimes even negative) throughout the day.
To make best use of these sources of energy, data centers will need to be highly automated and preferably sited
in remote locations near these sources to reduce transmission costs. This project will apply previous CAC work in
data center automation, analytics, and control standards and methods to this problem.
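To make the control problem concrete, the following minimal sketch (illustrative only, not the project's actual control logic) shows a power-aware admission policy that defers flexible batch work when renewable supply is scarce or power prices are high; the thresholds and job fields are assumptions made for the example.

```python
# Minimal sketch (illustrative only): defer flexible work when renewable
# supply is scarce or power is expensive. All thresholds are made up.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    deferrable: bool          # batch work that can wait for cheap power
    est_power_kw: float       # rough power draw while running

def admit(job: Job, renewable_fraction: float, price_usd_per_mwh: float) -> bool:
    """Decide whether to start a job now under current energy conditions."""
    if not job.deferrable:
        return True                            # latency-sensitive work always runs
    if price_usd_per_mwh <= 0:                 # negative/zero pricing: soak up surplus
        return True
    # Run deferrable work only when renewables dominate and power is cheap.
    return renewable_fraction >= 0.6 and price_usd_per_mwh <= 30.0

if __name__ == "__main__":
    job = Job("nightly-analytics", deferrable=True, est_power_kw=120.0)
    print(admit(job, renewable_fraction=0.8, price_usd_per_mwh=12.0))   # True
    print(admit(job, renewable_fraction=0.3, price_usd_per_mwh=55.0))   # False
```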
|
|||
2 | 2022-TTU-2 | Harnessing In-Network Compute for HPC System Architectures | Yong Chen |
In recent years, a new class of accelerator devices known as data processing units (DPUs) has been designed to
accelerate data-intensive workloads. Smart network interface controllers, or SmartNICs, such as the
NVIDIA BlueField and Broadcom Stingray series of devices, represent a particularly promising incarnation of DPUs.
However, the capabilities and potential use cases of SmartNICs remain largely unknown. Moreover, a unified interface
for utilizing SmartNICs remains unrealized. The objectives of this proposed project are three-fold. First, we will
investigate the capabilities of SmartNIC accelerators, as well as the set of potentially applicable SmartNIC use cases,
in order to better understand the SmartNIC functionalities desired by end-users and system architects. Second, using
these insights, and consistent with the goals of the OpenSNAPI project, we will then design a unified API for
SmartNIC programming. Third, we will work to provide a standard benchmarking methodology for SmartNIC accelerators
and SmartNIC-enabled systems that can be incorporated into the OpenHPCA project.
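Since the unified API is still to be designed, the sketch below is purely a hypothetical illustration of the idea of a vendor-neutral offload abstraction; none of the names come from OpenSNAPI, and the real interface may look quite different.

```python
# Hypothetical sketch of a vendor-neutral SmartNIC offload abstraction.
# None of these names come from OpenSNAPI; they only illustrate the idea of
# hiding device-specific details behind one common interface.
from abc import ABC, abstractmethod

class SmartNICBackend(ABC):
    @abstractmethod
    def offload(self, kernel: str, payload: bytes) -> bytes:
        """Run a named kernel on the NIC and return the result."""

class BlueFieldBackend(SmartNICBackend):
    def offload(self, kernel: str, payload: bytes) -> bytes:
        # Real code would dispatch to the device SDK; here we just echo.
        return payload

class SimulatedBackend(SmartNICBackend):
    def offload(self, kernel: str, payload: bytes) -> bytes:
        return payload[::-1]   # stand-in "computation" for testing without hardware

def run(backend: SmartNICBackend, data: bytes) -> bytes:
    # Application code targets the abstract interface, not a specific device.
    return backend.offload("checksum", data)

if __name__ == "__main__":
    print(run(SimulatedBackend(), b"abc"))
```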
|
|||
3 | 2022-UA-1 | Non-Fungible Token (NFT) based Digital Rights Management (NFT-DRM) | Pratik Satam |
The growth of blockchain and Non-Fungible Tokens has revolutionized online marketplaces, with blockchain
technology being used for cryptocurrency, smart contracts, user privacy maintenance, and data management,
providing decentralization, immutability, and cryptographic linking. A non-fungible token (NFT) is a unique and
non-interchangeable unit of data stored on a digital ledger (blockchain). NFTs can be associated with easily
reproducible items such as photos, videos, audio, and other types of digital files as unique items (analogous
to a certificate of authenticity) and use blockchain technology to give the NFT a verified and public
proof of ownership. Thus, NFTs have found applications in Digital Rights Management (DRM), giving rise to
a wide array of problems including NFT authenticity, NFT owner identity verification, NFT copyright authenticity,
and NFT content tracking. In this research, we propose Non-Fungible Token (NFT) based Digital Rights Management
(NFT-DRM), a framework to address these authenticity, owner identity verification, copyright authenticity,
and content tracking issues.
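As a hedged illustration of the authenticity problem, the sketch below checks a digital asset against the content hash recorded in a mocked ledger entry; the record layout is an assumption made for the example, not the NFT-DRM design.

```python
# Illustrative sketch: verify that a digital asset matches the content hash
# recorded in a (mocked) ledger entry and report the recorded owner.
# The ledger record layout is an assumption for illustration only.
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_asset(asset: bytes, ledger_record: dict) -> bool:
    """True if the asset's hash matches the hash minted into the token."""
    return content_hash(asset) == ledger_record["content_hash"]

if __name__ == "__main__":
    artwork = b"original image bytes"
    record = {"token_id": 42,
              "content_hash": content_hash(artwork),
              "owner": "0x1234abcd"}               # hypothetical wallet address
    print(verify_asset(artwork, record))            # True: authentic copy
    print(verify_asset(b"tampered bytes", record))  # False: content mismatch
    print("recorded owner:", record["owner"])
```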
|
|||
4 | 2020-UA-2 | Inter-Architecture Portability of Deep Neural Network and Side Channel Attack | Gregory Ditzler |
Side-channel attacks (SCAs) have been widely studied over the past two decades, resulting in numerous techniques
that use statistical models to extract system information from various side channels. More recently, machine learning
has shown significant promise in advancing the ability of SCAs to expose vulnerabilities in complex systems. Neural
networks can effectively learn non-linear relationships between many features within a side channel, a capability that is
limited in shallow machine learning algorithms such as SVMs and random forests. In this project, we propose a deep neural network
(DNN) approach and a multi-architecture data aggregation technique to profile power traces for a system with an embedded processor.
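The sketch below is a self-contained stand-in for the profiling step, using synthetic traces and a small scikit-learn MLP rather than the project's actual DNN or measured power data.

```python
# Sketch of profiling-style SCA on synthetic data: a small neural network
# learns to predict a secret-dependent label from noisy "power traces".
# The synthetic data and model choice are stand-ins, not the project's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 64
labels = rng.integers(0, 4, size=n_traces)           # e.g., a key-byte class
leaks = np.zeros((n_traces, n_samples))
leaks[np.arange(n_traces), 10 + labels] = 1.0         # label shifts a leakage peak
traces = leaks + 0.5 * rng.standard_normal((n_traces, n_samples))

X_tr, X_te, y_tr, y_te = train_test_split(traces, labels, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("recovery accuracy:", model.score(X_te, y_te))
```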
|
|||
5 | 2020-UA-3 | 5G based Federated Testbed for Cybersecurity | Salim Hariri |
Individual users, as well as businesses, depend increasingly on secure computers and computer networks, and the threats posed to them by malicious agents continue to grow. Thus, an urgent need
for accurate, adaptive, and automatic detection of cyber-attacks and intrusions has emerged, as has the need for adaptive, autonomic protection against such attacks. In this project we
propose to develop a modular and adaptive cyber immunity mechanism to overcome the security deficiencies of current computing systems.
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2021-TTU-1 | Renewable Energy Powered Data Center Control | Alan Sill |
Availability of large quantities of renewable energy can break the cost curve for large scale computing. For most
high performance computing centers, energy is a significant fraction of operating cost and, over the lifetime of the
equipment, can approach the cost of the computing clusters themselves. Wind power and solar energy are increasingly
available in the US and worldwide, but unlike previous sources of renewable energy such as hydroelectric plants,
each of these has significantly variable availability and cost (sometimes even negative) throughout the day.
To make best use of these sources of energy, data centers will need to be highly automated and preferably sited
in remote locations near these sources to reduce transmission costs. This project will apply previous CAC work in
data center automation, analytics, and control standards and methods to this problem.
|
|||
2 | 2021-TTU-2 | Integrated Data Collection and Visualization Framework for Data Centers based on Telemetry Model | Tommy Dang |
Understanding the status of high-performance computing platforms and correlating applications to resource usage
provide insight into the interactions among platform components. In this project, we will build on the
prior 2020-TTU-2 project and continue developing cutting-edge visualization, monitoring, and management
solutions for HPC systems, including data collection, visual analytics, and management.
|
|||
3 | 2019-TTU-3 | xBGAS (Extended Base Global Address Space) Architecture Optimizations and Applications | Yong Chen |
The xBGAS project is a collaboration between Tactical Computing Labs (TCL), Texas Tech University (TTU),
Arizona State University (ASU), and the University of Cambridge (UC). The goals of the Extended Base Global
Address Space (xBGAS) project are three-fold: 1) to provide 128-bit extended addressing capabilities based
on the RISC-V architecture while maintaining ABI compatibility (e.g., RV64 applications will execute without modification),
2) to make extended addressing flexible enough to support multiple target application spaces and system architectures,
e.g., traditional data centers, clouds, and HPC, and 3) to ensure that extended addressing does not rely upon any one
virtual memory mechanism. The overall xBGAS project includes five major tasks: design and development of the
ISA extension, design and development of compilers and toolchains to support the xBGAS extended instructions
and addressing modes, design and development of runtime libraries, design and development of applications
built upon xBGAS, and porting benchmarks.
|
|||
4 | 2021-UA-1 | Artificial Intelligence (AI) based Reputation Management Service (AI-RMS) | Ali Akoglu |
Social media users share their experiences, feelings, and opinions on Web-based social media platforms, and these posts can influence
others' opinions regarding a person or a brand and its products. This influence is so strong that insurance companies
offer policies against a person's or a company's reputation being damaged by online attacks. Thus, there is a need
for a scalable framework to quantify the impact of different social media posts on the reputation of companies, organizations,
celebrities, etc. In this research, we propose to develop a framework that builds a reputation integrity score for an entity
based on social media communication about that entity.
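A minimal sketch of the scoring idea, assuming each post already carries a sentiment polarity in [-1, 1] and an audience-reach weight; the field names and weighting scheme are illustrative, not the framework's design.

```python
# Illustrative reach-weighted sentiment aggregation into a 0-100 score.
# Field names and the weighting scheme are assumptions for illustration.
def reputation_score(posts):
    """posts: iterable of dicts with 'sentiment' in [-1, 1] and 'reach' >= 0."""
    total_reach = sum(p["reach"] for p in posts)
    if total_reach == 0:
        return 50.0                              # neutral default with no signal
    weighted = sum(p["sentiment"] * p["reach"] for p in posts) / total_reach
    return round(50.0 * (weighted + 1.0), 1)     # map [-1, 1] -> [0, 100]

if __name__ == "__main__":
    posts = [{"sentiment": 0.8, "reach": 1200},   # positive post, wide reach
             {"sentiment": -0.9, "reach": 300},   # negative post, small audience
             {"sentiment": 0.1, "reach": 500}]
    print(reputation_score(posts))               # 68.5
```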
|
|||
5 | 2021-UA-2 | AI-Driven Crisis Management Modeling, Analysis, and Prediction | Salim Hariri, Chongke Wu |
With the advances in Internet technologies and services, social media has gained enormous popularity,
especially because these technologies provide anonymity, which harbors spam bots, hacker discussion forums,
rogue players, and misinformation that can be maliciously manipulated to negatively impact the reputation of
targeted businesses and/or exploit mistakes or crisis events. In this project, we will leverage the
NSF CAC research capabilities in AI, real-time sentiment analysis, autonomic computing, and
machine learning (e.g., ensemble learning, deep learning, clustering) to identify fake news and the
identity of bad actors and social bots who maliciously aim at destroying the reputation of targeted businesses.
In this research, we apply autonomic monitoring of social media events and news, novel text feature extraction
methods, and sentiment analysis to detect and identify spam bots, sources of fake news, and rogue players, and then
use these insights to develop effective defensive and offensive strategies to manage the detected crisis
and minimize its impact. This project will leverage our ongoing work on detecting anomalous behavior,
identifying the authors of malicious messages, and our AI recommendation system.
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2020-TTU-1 | Data Center Control for Renewable Energy Applications | Alan Sill |
Availability of large quantities of renewable energy can break the cost curve for large scale computing. For most high performance computing centers, energy is a significant fraction of operating cost and, over the lifetime of the
equipment, can approach the cost of the computing clusters themselves. Wind power and solar energy are increasingly available in the US and worldwide, but unlike previous sources of renewable energy such as hydroelectric plants, each
of these has significantly variable availability and cost (sometimes even negative) throughout the day. To make best use of these sources of energy, data centers will need to be highly automated and preferably sited in remote locations
near these sources to reduce transmission costs. This project will apply previous CAC work in data center automation, analytics, and control standards and methods to this problem.
|
|||
2 | 2020-TXSTATE-1 | Data Center Analytics and Control | Ziliang Zong |
Datacenter-level disasters happen but are rarely studied. Since the consequences of a datacenter-level failure could be catastrophic and traditional fault-tolerance techniques will not work, it is critical to understand how to mitigate
such disasters without affecting user experience. The goal of this project is to explore datacenter-level disaster recovery techniques with minimal user impact.
|
|||
3 | 2020-TXSTATE-2 | Data Movement Optimization of RLWE-based Homomorphic Encryption on Coherent Accelerators | Lisa Gittner |
This project aims to develop accelerated solutions of Somewhat Homomorphic Encryption (SHE) based on the Ring Learning with Errors scheme (RLWE). Recent research has identified data movement between host and device as a critical
performance bottleneck in RLWE-based SHE. Our work will leverage the recently introduced Unified Memory technology and the Coherent Accelerator Processor Interface (CAPI) to develop an algorithm for the organization and placement of
data within a Hybrid Memory System (HMS). The algorithm will reduce the amount of data movement between the host and the device and overlap remote fetches with computation on the accelerator to minimize exposed latency. The decision of
when to move data from host to device will be guided by a Machine Learning model. A semi-supervised classifier will be trained offline to learn the relationship between SHE computation and data access characteristics and placement
configurations. The optimal configuration will then be selected at runtime via autotuning methods. In prior work, we have deployed ML-driven heuristics to optimize accelerated applications where it has yielded integer factor performance
improvements.
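As a hedged illustration of the ML-guided placement decision, the sketch below trains a simple decision tree on synthetic (access pattern, best placement) examples and queries it at "runtime"; the features, training data, and model are stand-ins for the project's semi-supervised classifier, and none of the CAPI or Unified Memory machinery is shown.

```python
# Sketch: learn a host-vs-device placement policy from synthetic examples.
# Features, labels, and training data are illustrative assumptions only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
# Features per ciphertext buffer: [size_mb, reuse_count, compute_intensity]
X = rng.uniform([1, 0, 0], [512, 50, 10], size=(400, 3))
# Synthetic ground truth: keep small, heavily reused, compute-heavy data on device.
y = ((X[:, 0] < 128) & (X[:, 1] > 10) & (X[:, 2] > 3)).astype(int)  # 1 = device

policy = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

def place(size_mb, reuse_count, compute_intensity):
    where = policy.predict([[size_mb, reuse_count, compute_intensity]])[0]
    return "device" if where == 1 else "host"

print(place(64, 30, 8))    # likely "device"
print(place(400, 2, 1))    # likely "host"
```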
|
|||
4 | 2020-UA-1 | Parallel Machine Learning Models (PMLM) for Securing IoT infrastructure | Salim Hariri, Ali Akoglu, Gregory Ditzler |
The advent of ‘smart’ infrastructure systems that integrate digital communications and controls, with human operators as beneficiaries, has created more new vulnerabilities than would exist if the sub-systems were isolated from one
another. Sophisticated cyberattacks can exploit these vulnerabilities to disrupt or even completely disable the operations of our critical infrastructures and their services. The recent embrace of the Internet of Things (IoT), autonomous
driving, and cloud computing will further exacerbate the cybersecurity problem. Anomaly behavior analysis based Intrusion Detection Systems (ABA-IDS) are the go-to approach for detecting attacks on and securing this smart infrastructure. ABA-IDS depend heavily on
accurate modeling of the normal behavior of the systems to detect attacks; this modeling of normal behavior is performed using machine learning techniques. To address the IoT security problem, we need to train models that are not only
highly accurate in anomaly detection but also highly scalable, so that abnormal behavior can be detected across large clusters of IoT devices. The main goal of this research project is to
explore the use of GPUs and other parallel processing techniques to improve the computational performance of the machine learning models used to secure IoT infrastructure.
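As a CPU-only stand-in for the intended GPU acceleration, the sketch below trains per-cluster anomaly detectors in parallel across cores; the telemetry is synthetic and the model choice is illustrative, not the project's actual models.

```python
# Sketch: train per-device-cluster anomaly detectors in parallel on CPU cores,
# as a stand-in for the project's GPU-accelerated approach. Data is synthetic.
import numpy as np
from joblib import Parallel, delayed
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
clusters = {f"iot-cluster-{i}": rng.normal(size=(5000, 8)) for i in range(8)}

def fit_detector(name, telemetry):
    model = IsolationForest(random_state=0).fit(telemetry)   # learn "normal"
    return name, model

# One training job per cluster, spread across available cores.
models = dict(Parallel(n_jobs=-1)(
    delayed(fit_detector)(name, data) for name, data in clusters.items()))

suspect = rng.normal(loc=6.0, size=(1, 8))          # far from normal behavior
print(models["iot-cluster-0"].predict(suspect))     # [-1] flags an anomaly
```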
|
|||
5 | 2020-UA-2 | Tactical Cyber Immune System | Gregory Ditzler, Salim Hariri |
Individual users, as well as businesses, depend increasingly on secure computers and computer networks, and the threats posed to them by malicious agents continue to grow. Thus, an urgent need
for accurate, adaptive, and automatic detection of cyber-attacks and intrusions has emerged, as has the need for adaptive, autonomic protection against such attacks. In this project we
propose to develop a modular and adaptive cyber immunity mechanism to overcome the security deficiencies of current computing systems.
|
|||
6 | 2020-UNT-1 | Declustered RAID for HPC Storage Systems and UNT Affiliated Site Updates | Song Fu |
Disk arrays spread data across several disks and access them in parallel to increase data transfer rates and I/O rates. Disk arrays are, however, highly vulnerable to disk failures. One of the challenges facing RAID storage technology
is the growing time needed to rebuild failed disks, which increases the risk of data loss and threatens the long-running data storage technology's viability. When a disk fails, one RAID controller and a handful of disks do all the
recovery work while the other disks and RAID controllers are not involved in the recovery process. The time it takes to rebuild an 8 TB or larger disk drive can reach several days, depending on how busy the storage system and RAID group
are. In this project, we will explore declustered RAID, machine learning enabled proactive disk data protection, and vectorized RAIDZ technologies on both HDD and SSD drives to develop always-on HPC storage systems.
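A minimal sketch of the core declustering idea: stripe units are placed pseudo-randomly across the whole pool, so rebuilding a failed disk draws a little data from every surviving disk instead of from one small RAID group. The placement function is illustrative, not a production layout.

```python
# Sketch: pseudo-random (hash-based) placement of stripe units across a pool,
# so rebuilding one failed disk reads a little from every surviving disk.
# The placement function is illustrative, not a production RAID layout.
import hashlib
from collections import Counter

N_DISKS, COPIES = 24, 3

def placement(stripe_id):
    """Deterministically pick COPIES distinct disks for a stripe unit."""
    disks, i = [], 0
    while len(disks) < COPIES:
        h = hashlib.sha256(f"{stripe_id}:{i}".encode()).digest()
        d = int.from_bytes(h[:4], "big") % N_DISKS
        if d not in disks:
            disks.append(d)
        i += 1
    return disks

failed = 7
rebuild_sources = Counter()
for stripe in range(100_000):
    disks = placement(stripe)
    if failed in disks:                      # this stripe lost a copy
        rebuild_sources.update(d for d in disks if d != failed)

# Rebuild reads are spread nearly evenly over all surviving disks.
print(len(rebuild_sources), "surviving disks participate")
print(min(rebuild_sources.values()), "to", max(rebuild_sources.values()), "units each")
```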
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2019-TTU-1 | Visualizing, Monitoring, and Predicting Health Status of HPC Centers | Tommy Dang |
Monitoring data centers is challenging due to their size, complexity, and dynamic nature. This project proposes visual approaches for situational awareness and health monitoring of HPC systems. The visualization requirements are
expanded on the following dimensions: HPC spatial layout, temporal domain (historical vs. real-time tracking), job scheduling data, and system health services such as temperature, fan speed, and power consumption. We therefore focus on
the following design goals: 1) provide a spatial and temporal overview across hosts and racks, 2) allow system administrators to filter by time-series features such as sudden changes in temperature for system debugging, 3) inspect the
correlation between system health services and job scheduling information in a single view, and 4) characterize the HPC systems using unsupervised learning and provide predictive analysis.
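For design goal 2, one simple (illustrative) way to surface sudden changes in a temperature series is a rolling z-score; the window length and threshold below are assumptions, not the tool's defaults.

```python
# Sketch: flag sudden changes in a temperature series with a rolling z-score.
# Window length and threshold are illustrative choices, not the tool's defaults.
import numpy as np

def sudden_changes(series, window=30, threshold=4.0):
    series = np.asarray(series, dtype=float)
    flags = []
    for t in range(window, len(series)):
        past = series[t - window:t]
        std = past.std() or 1e-9                 # avoid divide-by-zero
        z = abs(series[t] - past.mean()) / std
        if z > threshold:
            flags.append(t)
    return flags

rng = np.random.default_rng(3)
temps = 45 + rng.normal(scale=0.5, size=300)     # steady node temperature (C)
temps[200:] += 8                                 # sudden jump, e.g. fan failure
print(sudden_changes(temps))                     # flags indices at and after the jump
```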
|
|||
2 | 2019-TTU-2 | Extended Base Global Address Space for Data Center Scale Addressing | Yong Chen |
In collaboration between Tactical Computing Labs (TCL), Texas Tech University (TTU), and Boston University (BU), the goals of the Extended Base Global Address Space (xBGAS) project are three-fold: 1) to provide 128-bit extended addressing
capabilities based on the RISC-V architecture while maintaining ABI compatibility (e.g., RV64 applications will execute without modification), 2) to make extended addressing flexible enough to support multiple target application spaces and system architectures, e.g., traditional
data centers, clouds, and HPC, and 3) to ensure that extended addressing does not rely upon any one virtual memory mechanism. The overall xBGAS project includes four major tasks: design and development of the ISA extension, design and development of
the compiler and toolchain to support the extended instructions and addressing modes, design and development of runtime libraries, and porting of benchmarks and applications.
|
|||
3 | 2019-TTU-3 | Predictive Models for Mental Health Diversion from Jails | Lisa Gittner |
This project covers data recovery and salvage of all county booking, court proceeding, probation, and mental health records. Data is being recovered from an unstable system dating to 1989. We will compile the data and return 30 years of recovered records.
|
|||
4 | 2019-UA-1 | Artificial and Emotional Intelligence in Health-Care | Salim Hariri |
The primary goal of this project is to design and develop the capability of an intelligent assistant that can be used primarily in the health care field. We are going to utilize different technologies, such as natural language
processing (NLP), scenario representation, AI-based chat capability, and detection of cognitive state. The intelligent assistant can be trained to perform in different roles, including acting as a personal assistant to a patient or a nurse, or it
can even play the role of a patient in a training environment for doctors and nurses.
|
|||
5 | 2019-UA-2 | Seniors Virtual Assistant (SeVA) | Salim Hariri, Chongke Wu |
Delirium affects 60% of hospitalized dementia patients, impacting long-term survival and quality of life. SeVA (Senior’s Virtual Assistant) aims to address the potential gaps of the current hospital practices in early delirium
detection, management, and prevention through continuous monitoring of clinical, mental, and emotional factors. Thus, SeVA provides timely non-pharmacological intervention for patients with Alzheimer’s disease and related dementias.
|
|||
6 | 2019-UA-3 | MLABA Multi-Layer Anomaly Behavior Analysis | Salim Hariri, Sicong Shao, Pratik Satam |
Cyberspace includes a wide range of physical networks, storage and computing devices, applications, and users with different roles and requirements. Securing and protecting such complex and dynamic cyberspace resources and services is a
grand challenge. MLABA (Multi-Layer Anomaly Behavior Analysis) aims at developing a multi-layer anomaly behavior analysis of all components associated with each cyberspace layer and how they interact with each other, in order to achieve
superior capabilities in characterizing their normal operations and proactively detecting any anomalous behavior that might be triggered by malicious attacks. MLABA uses unsupervised deep learning to collect the feature sets of the Physical
Persona Footprint (PPF), Logical Persona Footprint (LPF), and User Persona Footprint (UPF), and to classify normal and abnormal behaviors.
|
|||
7 | 2019-UA-4 | Twitter Author Identification Using Unsupervised Learning | Salim Hariri, Cihan Tunc |
With advances in Internet technologies and services, social media has gained enormous popularity, and these technologies provide anonymity that harbors hacker discussion forums, underground markets, the dark web, etc. We
propose to use unsupervised author identification techniques for Twitter to tackle social media forensics cases in which suspects of authorship are either missing or unreliable. We will develop tools to collect potentially malicious
Twitter user data, extract unique signatures that can be used to model the characteristics of Twitter users, and design an unsupervised learning-based method to identify suspects on Twitter.
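A toy illustration of such an unsupervised pipeline, using character n-gram stylometric features and k-means over a handful of made-up tweets; the project's actual features and models may differ.

```python
# Toy sketch: character n-gram stylometric features + k-means clustering to
# group tweets by (unknown) author. The sample tweets and k are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

tweets = [
    "lol cant believe this drop!!! buy now fam",
    "LOL cant wait, buy buy buy!!! to the moon fam",
    "Our analysis indicates the vulnerability affects versions prior to 2.4.",
    "We recommend patching immediately; the vulnerability is actively exploited.",
]

# Character n-grams capture writing style (punctuation, casing, spelling).
features = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(tweets)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)   # expected: the two informal tweets group apart from the two formal ones
```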
|
|||
8 | 2019-UA-5 | Bluetooth Security for Smart Infrastructures | Salim Hariri |
With the rapid deployment of IoT devices, Personal Area Networks (PANs) such as Bluetooth networks have become the wireless network of choice for short-range communications. It is important that Bluetooth networks are secure against
cyberattacks such as Denial of Service (DoS), man-in-the-middle attacks, battery-draining attacks, etc. In this project we will develop an anomaly-based intrusion detection system for Bluetooth networks (BIDS). The BIDS will use an n-gram
approach to characterize the normal behavior of the Bluetooth protocol. This project will help in detecting malicious attacks such as power utilization attacks, DoS, and man-in-the-middle attacks on the Bluetooth network.
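To make the n-gram idea concrete, the sketch below profiles the n-grams seen in benign protocol event sequences and flags sequences containing unseen n-grams; the event names, n, and flagging rule are illustrative assumptions, not the actual BIDS feature set.

```python
# Sketch of n-gram anomaly detection over protocol event sequences.
# Event names, n, and the flagging rule are illustrative assumptions.
def ngrams(seq, n=3):
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def train_profile(benign_sequences, n=3):
    profile = set()
    for seq in benign_sequences:
        profile |= ngrams(seq, n)
    return profile

def is_anomalous(seq, profile, n=3):
    """Flag a sequence if any of its n-grams was never seen in benign traffic."""
    return bool(ngrams(seq, n) - profile)

benign = [["inquiry", "page", "connect", "auth", "encrypt", "data", "disconnect"],
          ["inquiry", "page", "connect", "auth", "encrypt", "data", "data", "disconnect"]]
profile = train_profile(benign)

print(is_anomalous(["inquiry", "page", "connect", "auth", "encrypt", "data", "disconnect"], profile))  # False
print(is_anomalous(["inquiry", "connect", "connect", "connect", "data"], profile))                     # True
```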
|
|||
9 | 2019-UA-6 | FCTaaS- Federated Cyber Security Testbed as a Service | Salim Hariri |
The advent of ‘smart’ infrastructure systems has created new vulnerabilities. Sophisticated cyberattacks can exploit these vulnerabilities to disrupt or even completely disable the operations of our critical infrastructures and their services. There
are many possible testbeds (physical and virtual) and simulations for critical infrastructures and cyber systems. To understand the interdependency among these testbeds and their implications on cybersecurity issues, it is important to
be able to compose several testbeds into one federated testbed that includes smart devices and sensors, IoT devices, cloud systems, smart grids, smart buildings, etc. (ultimately what is known as smart cities or smart governments). The
main goal of this project is to explore innovative techniques to allow seamless composition of a federated testbed that consists of several heterogeneous testbeds, including virtual cybersecurity testbeds, IoT testbeds, and cyber-physical
testbeds.
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2018-TTU-1 | Cloud Standards Testing & Testbeds | Alan Sill |
The focus of this work is on creation of testbeds that can allow direct operation and comparison of a variety of cloud software. This work is being conducted in cooperation with the Cloud Plugfest developer testing series, US National
Institute of Standards and Technology (NIST) cloud computing efforts, and related cloud software testing programs. This testing is used to discover, itemize and understand several specific aspects of the cloud computing landscape to
identify approaches of business value to CAC members. This project can be expanded to include other developing standards testing as needed.
|
|||
2 | 2018-TTU-2 | Data Center Analytics and Control | Yong Chen |
|
|||
3 | 2018-TTU-3 | Data Integration Analytics and Risk Models for Mental Health Diversion in Jails | Lisa Gittner |
|
|||
4 | 2018-TTU-4 | Virtual Secure Sandbox for Big Data Analytics | Susan Mengel |
|
|||
5 | 2018-UA-1 | Twitter Author Identification Using Unsupervised Learning | Sicong Shao |
|
|||
6 | 2018-UA-2 | MLABA Multi-Layer Anomaly Behavior Analysis | Samuel Hess |
|
|||
7 | 2018-UA-3 | Intelligent Cyber Security Assistant | Carla Sayan |
|
|||
8 | 2018-UA-4 | Machine Learning Based Fingerprinting of Wireless Modulation Schemes | Salim Hariri, Pratik Satam |
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2017-TTU-1 | Cloud Standards Testing & Testbeds | Alan Sill |
The focus of this work is on creation of testbeds that can allow direct operation and comparison of a variety of cloud software. This work is being conducted in cooperation with the Cloud Plugfest developer testing series, US National
Institute of Standards and Technology (NIST) cloud computing efforts, and related cloud software testing programs. This testing is used to discover, itemize and understand several specific aspects of the cloud computing landscape to
identify approaches of business value to CAC members. This project can be expanded to include other developing standards testing as needed.
|
|||
2 | 2017-TTU-2 | Data Center Analytics and Control | Yong Chen |
|
|||
3 | 2017-TTU-3 | DuraStore - Achieving Highly Durable Data Centers | Yong Chen |
This project concentrates on significantly enhancing data durability in data centers without using remote-site data replication (geo-replication) like in existing solutions. It also considers other important characteristics of the
storage system, including load balance and scalability, holistically. One of the major research challenges is that some of the existing techniques for improving correlated-failure durability sacrifice load balance, scalability, and
numerous other features of the existing storage systems. This project will investigate these issues and create innovative solutions.
|
|||
4 | 2017-TTU-4 | Virtual Secure Sandbox for Big Data Analytics | Susan Mengel |
Big data often involves disparate sources of data from unrelated files that are available over a period of many years. The presumably unrelated data, however, is difficult to organize and use due to its size and dissimilar attributes
with different factor nomenclature and/or file formats. The main purpose of this project is to develop enabling technologies to organize, visualize, and analyze large repositories of potentially unrelated files across several formats. The
products from this work will be integrated into precision risk Big Data analytics through the center.
|
|||
5 | 2017-TTU-5 | Risk Models using Big Data | Lisa Gittner |
The proposed project refines risk algorithms developed for handling disparate sources of data. This effort extends project 2016-TTU-5.
|
|||
6 | 2017-MSU-1 | A Performance Benchmark for next-generation Intrusion Detection Systems | Sherif Abdelwahed |
A common trend in Intrusion Detection Systems (IDSs) is to use graph-based data structures to analyze network traffic and attack patterns. Detecting a threat in a timely manner is fundamental to reducing the risk to which the system is
exposed, but no current studies aim at providing useful information for sizing Cloud or HPC infrastructures to meet certain service-level objectives. This project aims to prototype a completely distributed, property-graph-based benchmark
for next-generation IDSs.
|
|||
7 | 2017-UA-1 | Resilient Cloud Services | Cihan Tunc |
Current cloud security techniques are mainly labor-intensive, rely on signature-based attack detection tools, and are not flexible enough to handle the current cyberspace complexity, dynamism, and epidemic-style propagation of attacks.
Furthermore, as organizational boundaries gradually disappear, it has become infeasible to create a defensible perimeter. In addition, insider attacks remain a major issue, since insiders have access to the cloud services and
even the underlying infrastructure. In this project, scalable and resilient cloud services will be prototyped and tested using a public cloud service such as Amazon AWS.
|
|||
8 | 2017-UA-2 | Physics Aware Programming for 3D Heart Simulations | Salim Hariri |
The main purpose of this project is to prototype an HCI environment for Chronic Heart Failure (CHF) that guides clinical experiments and improves care delivery and patient management.
|
|||
9 | 2017-UA-3 | IoT Security Framework | Salim Hariri |
The main goal of this project is to design and develop an IoT Security Framework that enables developers to predict and mitigate security issues systematically. The project evaluates a security framework for deploying highly secure IoT
applications for Smart Infrastructures. Our approach is based on an Anomaly Behavior Analysis methodology to detect
any type of attack (known or unknown).
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2016-TTU-1 | Cloud Benchmarking & Performance | Yong Chen |
Benchmarking a system is the process of assessing its performance and other characteristics so that they can be compared with other systems. There is a strong desire for benchmarking and evaluating Cloud computing systems. Cloud
computing provides service-oriented access to computing, storage, and networking resources. On one hand, considering the number of Cloud computing providers and the different services each provider offers, Cloud users need benchmark
information to select the best service and provider for their needs. On the other hand, Cloud providers and Cloud architects need benchmarking results to create optimized architectures. The problem is that the Cloud is a very complex and
dynamic environment; therefore, there are important differences between benchmarking dynamic Cloud environments and traditional benchmarking of static systems. We have performed initial research on benchmarking in the Cloud
and examined various challenges in creating and deploying Cloud benchmarks; we found that the most important problems for Cloud users are data isolation, security, and unreliable performance. Interference and performance isolation
are other major challenges for developers and architects of Cloud systems.
|
|||
2 | 2016-TTU-2 | Unistore | Yong Chen |
Emerging large-scale applications on Cloud computing platforms, such as information retrieval, data mining, online business, and social networking, are data- rather than computation-intensive. The storage system is one of the most critical
components of Cloud computing. Traditional hard disk drives (HDDs) are currently the dominant storage devices in Clouds, but they are notorious for long access latency and proneness to failure. Emerging storage class memory (SCM), such as Solid
State Drives, provides a promising new storage solution with high bandwidth, low latency, and no mechanical components, but with inherent limitations of small capacity, short lifetime, and high cost. The objective of this project is to
build an innovative unified storage architecture (Unistore) with the co-existence and efficient integration of heterogeneous HDD and SCM devices for Cloud storage systems.
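A minimal sketch of one ingredient of such a unified architecture, placing objects on an SCM or HDD tier by access frequency and size; the policy and thresholds are illustrative assumptions, not Unistore's actual design.

```python
# Illustrative hot/cold placement between an SCM (SSD) tier and an HDD tier.
# Thresholds and the policy itself are assumptions, not Unistore's design.
from dataclasses import dataclass

@dataclass
class ObjStats:
    size_mb: float
    accesses_per_day: float

def choose_tier(stats: ObjStats, scm_free_mb: float) -> str:
    """Small, hot objects go to SCM while capacity allows; the rest to HDD."""
    hot = stats.accesses_per_day >= 100
    small = stats.size_mb <= 256
    if hot and small and scm_free_mb >= stats.size_mb:
        return "SCM"
    return "HDD"

if __name__ == "__main__":
    print(choose_tier(ObjStats(size_mb=64, accesses_per_day=500), scm_free_mb=10_000))    # SCM
    print(choose_tier(ObjStats(size_mb=4096, accesses_per_day=500), scm_free_mb=10_000))  # HDD
```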
|
|||
3 | 2016-TTU-3 | Financial Intelligence for Banks | Zhangxi Lin, Jerry Perez |
Financial Intelligence for Banks is a new project of the TTU Cloud Computing efforts. The focus of this work is on the creation of software and testing methodologies that give banks the ability to perform short-term market analysis for
customer portfolios through stress testing, loan product pricing, and targeted marketing for current and potential customers. This work is being conducted in cooperation with CAC partner site Happy State Bank in Amarillo, Texas. Dr. Perez
has been project leader of this working group for the CAC since July 2015. The Center for Advanced Analytics and Business Intelligence (CAABI) is a major participating institution at TTU, based on its existing research portfolio.
|
|||
4 | 2016-TTU-4 | Cloud Standards, Security and Interoperability Testbeds | Alan Sill |
|
|||
5 | 2016-TTU-5 | Precision Risk Big Data Analytics for Population Health | Ravi Vadapalli |
This project combines individual and public health exposome data to develop and prototype refined disease risk analytics using multi-level models. These models will then be validated using obesity and CVD exemplar datasets, as well as
Patient Health Information (PHI).
|
|||
6 | 2016-UA-1 | VIMP: Vehicle Information and Management Portal | Pratik Satam |
|
|||
7 | 2016-UA-2 | Security Framework for Supply Chain Management Services | Salim Hariri |
|
|||
8 | 2016-UA-3 | Online Big Data Analytics for Cyber Systems | Greg Ditzler |
|
|||
9 | 2016-UA-4 | Resilient Cloud Services | Cihan Tunc |
Current cloud security techniques are mainly labor-intensive, rely on signature-based attack detection tools, and are not flexible enough to handle the current cyberspace complexity, dynamism, and epidemic-style propagation of attacks.
Furthermore, as organizational boundaries gradually disappear, it has become infeasible to create a defensible perimeter. In addition, insider attacks remain a major issue, since insiders have access to the cloud services and
even the underlying infrastructure. In this project, scalable and resilient cloud services will be prototyped and tested using a public cloud service such as Amazon AWS.
|
|||
10 | 2016-UA-5 | Anomaly Behavior Analysis for IoT Sensors | Jesus Pacheco |
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2015-TTU-1 | Cloud Benchmarking & Performance | Yong Chen |
Benchmarking a system is the process of assessing its performance and other characteristics so that they can be compared with other systems. There is a strong desire for benchmarking and evaluating Cloud computing systems. Cloud
computing provides service-oriented access to computing, storage, and networking resources. On one hand, considering the number of Cloud computing providers and the different services each provider offers, Cloud users need benchmark
information to select the best service and provider for their needs. On the other hand, Cloud providers and Cloud architects need benchmarking results to create optimized architectures. The problem is that the Cloud is a very complex and
dynamic environment; therefore, there are important differences between benchmarking dynamic Cloud environments and traditional benchmarking of static systems. We have performed initial research on benchmarking in the Cloud
and examined various challenges in creating and deploying Cloud benchmarks; we found that the most important problems for Cloud users are data isolation, security, and unreliable performance. Interference and performance isolation
are other major challenges for developers and architects of Cloud systems.
|
|||
2 | 2015-TTU-2 | Unistore | Yong Chen |
Emerging large-scale applications on Cloud computing platforms, such as information retrieval, data mining, online business, and social networking, are data- rather than computation-intensive. The storage system is one of the most critical
components of Cloud computing. Traditional hard disk drives (HDDs) are currently the dominant storage devices in Clouds, but they are notorious for long access latency and proneness to failure. Emerging storage class memory (SCM), such as Solid
State Drives, provides a promising new storage solution with high bandwidth, low latency, and no mechanical components, but with inherent limitations of small capacity, short lifetime, and high cost. The objective of this project is to
build an innovative unified storage architecture (Unistore) with the co-existence and efficient integration of heterogeneous HDD and SCM devices for Cloud storage systems.
|
|||
3 | 2015-TTU-3 | Risk and Financial Analytics in Population Health | Ravi Vadapalli |
The main purpose of this project is to prototype readmission risk and insurance contract analytics. This involves deploying a sophisticated data warehouse for hosting, managing, and processing large volumes of healthcare data from
disparate sources. The secondary objective is to prototype advanced analytics that help in risk stratification and in assessing risk category transitions in population health modeling. The tertiary objective is to model insurance contracts
and their business value.
|
|||
4 | 2015-TTU-4 | Cloud Standards, Security and Interoperability Testbeds | Alan Sill |
|
|||
5 | 2015-TTU-5 | Big Data Financial Analytics | Zhangxi Lin |
|
|||
6 | 2015-MSU-1 | VIMP: Vehicle Information and Management Portal | Srishti Shrivastava |
|
|||
7 | 2015-MSU-2 | Model Based Automated Security Management of Distributed Systems | Sherif Abdelwahed |
|
# | Project ID | Project Description | Led By |
---|---|---|---|
1 | 2014-TTU-1 | Cloud Benchmarking & Performance | Soheil Mazaherin |
Benchmarking a system is the process of assessing its performance and other characteristics so that they can be compared with other systems. There is a strong desire for benchmarking and evaluating Cloud computing systems. Cloud
computing provides service-oriented access to computing, storage, and networking resources. On one hand, considering the number of Cloud computing providers and the different services each provider offers, Cloud users need benchmark
information to select the best service and provider for their needs. On the other hand, Cloud providers and Cloud architects need benchmarking results to create optimized architectures. The problem is that the Cloud is a very complex and
dynamic environment; therefore, there are important differences between benchmarking dynamic Cloud environments and traditional benchmarking of static systems. We have performed initial research on benchmarking in the Cloud
and examined various challenges in creating and deploying Cloud benchmarks; we found that the most important problems for Cloud users are data isolation, security, and unreliable performance. Interference and performance isolation
are other major challenges for developers and architects of Cloud systems.
|
|||
2 | 2014-TTU-2 | Unistore | Yong Chen |
Emerging large-scale applications on Cloud computing platforms, such as information retrieval, data mining, online business, and social networking, are data- rather than computation-intensive. The storage system is one of the most critical
components of Cloud computing. Traditional hard disk drives (HDDs) are currently the dominant storage devices in Clouds, but they are notorious for long access latency and proneness to failure. Emerging storage class memory (SCM), such as Solid
State Drives, provides a promising new storage solution with high bandwidth, low latency, and no mechanical components, but with inherent limitations of small capacity, short lifetime, and high cost. The objective of this project is to
build an innovative unified storage architecture (Unistore) with the co-existence and efficient integration of heterogeneous HDD and SCM devices for Cloud storage systems.
|
|||
3 | 2014-TTU-3 | Risk and Financial Analytics in Population Health | Ravi Vadapalli |
The main purpose of this project is to prototype readmission risk and insurance contract analytics. This involves deploying a sophisticated data warehouse for hosting, managing, and processing large volumes of healthcare data from
disparate sources. The secondary objective is to prototype advanced analytics that help in risk stratification and in assessing risk category transitions in population health modeling. The tertiary objective is to model insurance contracts
and their business value.
|
|||
4 | 2014-TTU-4 | Cloud Standards, Security and Interoperability Testbeds | Alan Sill |
|
|||
5 | 2014-TTU-5 | Cloud Security, Federation, and Access Management | Alan Sill |
|
|||
6 | 2014-MSU-1 | Autonomic Performance Management Service Deployment in Federated Cloud Computing Systems | Srishti Shrivastava |
|
|||
7 | 2014-MSU-2 | Model Based Automated Security Management of Distributed Systems | Sherif Abdelwahed |
|