The Saeed Lab (https://saeedlab.cis.fiu.edu/) is located on the 2nd floor of FIU’s CASE building, in room CASE 261. This laboratory space is dedicated to Dr. Saeed’s lab; it can accommodate 12 personnel and includes conference space and related facilities.

The research focus of the lab is to design, develop, and deploy scalable and efficient high-performance algorithms for a variety of scientific applications, including big-data genomics, proteomics, and connectomics. Dr. Saeed has an independent, dedicated lab space (approx. 1,000 sq. ft.) that is used by students for research. The lab carrels are equipped with desktop systems, machines with multicore processors, and, where necessary, HPC servers with GPUs. In addition, students have access to printers, storage devices, computer networking infrastructure, multiple conference spaces for collaboration, and essential research resources (such as research papers and books), as well as the compute infrastructure needed to complete their projects.

The School of Computing and Information Sciences (SCIS) maintains a data center, research and instructional labs, and computer classroom facilities. These facilities are housed in the Engineering and Computer Science (ECS) and PG6 Tech Station (PG6) buildings on the Modesto A. Maidique Campus in Miami, FL, and are maintained by a dedicated professional IT support staff, as noted in the staffing section below. The School provides file, compute, web, email, messaging, backup, print, and other computing services. Networking services include a 10 Gigabit Ethernet core network that interconnects rack-mounted switches and servers, and all school desktop systems are connected by 1 Gigabit switched ports. The network is highly redundant, with multiple fiber and copper paths, and is designed with routing fail-over capacity; automated monitoring of the network and servers is provided 24×7. The building subscribes to the university 802.11 WiFi network, and SCIS maintains additional research WiFi networks. The network interconnects at 10 Gbps to the campus backbone, which provides a 40 Gbps connection to the NAP of the Americas for connections to the Internet, Internet2, Florida LambdaRail, National LambdaRail, and CLARA (South American research) networks. Our systems feature a variety of open-source, commercial development, and scientific software products from numerous vendors, including IBM, Microsoft, ESRI, and MathWorks. We also provide middleware technologies to support web services.

Local lab HPC workstation: This local workstation is primarily used for MPI/OpenMP/CUDA development. It contains two Intel Xeon Gold 6338 processors (48 MB cache, 2.00 GHz, 32 cores each), 256 GB of RAM, and 10 TB of hard-drive storage, and is also equipped with an NVIDIA TITAN Xp GPU.
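To illustrate the kind of development this workstation supports, the following minimal hybrid MPI/OpenMP sketch (illustrative only; the file name, build command, and launch flags are our own choices and may vary with the installed MPI distribution) reports the number of OpenMP threads visible to each MPI rank:

/* hybrid_hello.c -- minimal hybrid MPI/OpenMP check (illustrative sketch).
 * Typical build:  mpicc -fopenmp hybrid_hello.c -o hybrid_hello
 * Typical run:    mpirun -np 2 ./hybrid_hello   (e.g., one rank per socket)
 */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank reports its OpenMP thread count; with one rank pinned to
     * each of the two 32-core Xeon Gold 6338 sockets, up to 32 threads
     * per rank are expected. */
    #pragma omp parallel
    {
        #pragma omp single
        printf("rank %d of %d: %d OpenMP threads\n",
               rank, size, omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}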

Local FPGA workstations: These servers are equipped with Intel Xeon processors, NVIDIA TITAN Xp GPUs (NVIDIA grant), and Intel DE5 FPGAs (Intel grant). Each server contains two Intel Xeon Gold 6152 CPUs @ 2.10 GHz (22 cores each), 128 GB of RAM, and 10 TB of hard-drive storage. The Terasic DE5-Net Stratix V GX FPGA is an ideal hardware solution for designs that demand high-capacity, high-bandwidth memory interfacing, ultra-low-latency communication, and power efficiency. The Stratix V GX FPGA features integrated transceivers that transfer at a maximum of 12.5 Gbps, allowing the DE5-Net to be fully compliant with version 3.0 of the PCI Express standard and enabling ultra-low-latency, direct connections to four external 10G SFP+ modules. For designs that demand high capacity and high speed for memory and storage, the DE5-Net provides two independent banks of DDR3 SO-DIMM RAM, four independent banks of QDRII+ SRAM, high-speed parallel flash memory, and four SATA ports. The feature set of the DE5-Net fully supports high-intensity applications such as low-latency trading, cloud computing, high-performance computing, data acquisition, network processing, and signal processing. Our basic code design, development, and testing will be done on these machines, which allow us to evaluate our preliminary parallel algorithms on a single machine for scalability analysis. For multi-node FPGA experiments, we will acquire access to national infrastructures.
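As a sketch of how these boards are typically programmed, the following minimal OpenCL C kernel (illustrative only; the kernel name and arguments are our own) is the kind of single-file kernel that can be compiled offline for the FPGA with the Intel FPGA SDK for OpenCL and launched from a standard OpenCL host program:

// vadd.cl -- illustrative OpenCL C kernel (not an existing lab file).
// Compiled offline for the FPGA (e.g., with the aoc offline compiler) and
// invoked from a host program through the standard OpenCL runtime API.
__kernel void vadd(__global const float *restrict a,
                   __global const float *restrict b,
                   __global float *restrict c,
                   const unsigned int n)
{
    unsigned int i = get_global_id(0);
    if (i < n)
        c[i] = a[i] + b[i];
}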

Local HPC (GPU-equipped) cluster resources (the Dragon Cluster): A dedicated HPC cluster with a distributed-memory architecture and multicore, GPU-equipped nodes is available to Dr. Saeed’s lab. The cluster consists of 12 nodes, each equipped with 2 Intel Xeon Scalable Gold 6248 processors at 2.5 GHz (20 cores each), 12 x 32 GB PC4-25600 3200 MHz DDR4 memory, 2 x Intel 1.6 TB SSDs, and 2 NVIDIA Quadro RTX 6000 GPUs. All of these resources can be used for our preliminary scalability experiments. We maintain an internal document on how to use the Dragon Cluster, which can be accessed here: https://saeedwiki.cis.fiu.edu/2022/06/03/how-to-use-slurm-on-dragon-cluster/
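For reference, jobs on the Dragon Cluster are submitted through Slurm with a batch script along the following lines (a minimal sketch; the job name, resource mapping, and benchmark binary are placeholders, and the authoritative instructions are in the wiki document linked above):

#!/bin/bash
#SBATCH --job-name=gpu-scaling        # illustrative job name
#SBATCH --nodes=2                     # the cluster has 12 such nodes
#SBATCH --ntasks-per-node=2           # e.g., one task per RTX 6000 GPU
#SBATCH --gres=gpu:2                  # request both GPUs on each node
#SBATCH --cpus-per-task=20            # one 20-core Xeon Gold 6248 per task
#SBATCH --time=04:00:00

# ./scaling_benchmark is a placeholder for the actual experiment binary.
srun ./scaling_benchmark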

NSF XSEDE (GPU-equipped) supercomputers (current PI Saeed award # TG-CCR150017), such as XSEDE Comet, Blue Waters (Cray XE/XK), and FRONTERA (Intel “Sky Lake” Xeon processors, DDR4 memory, SSD storage), can be accessed by research groups through calls for allocations of compute time and storage. Systems such as these are ideal for scalability studies of the proposed algorithms, and peta-scale results will be reported. We have an additional award from the XSEDE supercomputing facility (award # TG-ASC200004) that includes the Comet supercomputer, a homogeneous computing facility with similar multicore and manycore processors. These include Intel Xeon E5-2680v3 processors, with 1,728 processor cores, 72 nodes, 8 TB of memory, 286 GB of memory per node, and 884 TFlops of peak performance. We currently have access to 100,000 hours of compute time, which can be extended via supplements and renewals. In addition, we have similar access to supercomputing machines with attached GPUs. The Comet cluster consists of 1,944 compute nodes, each equipped with 2 sockets of 12-core Intel Xeon E5-2680v3 processors, 2 NUMA domains, and 64 GB of DDR4 DRAM, with GCC 7.2 available under CentOS. The compute nodes are interconnected through a 56 Gbps FDR InfiniBand network. The maximum allowed number of nodes per job is 72, and the maximum allowed runtime per job is 48 hours.