Overall Goals and Approach
We will use the great majority of the equipment to improve the compute cluster capacity generally available to PIs across UCI schools and departments. This is a unique opportunity to demonstrate the value of central research cyberinfrastructure and to seek ongoing support for it.
We will also make a small subset of servers available to research groups that cannot use cluster computing in their research but have the infrastructure and personnel to support dedicated servers. We will create a simple “request for proposal” process for this, with the Broadcom Compute Server Donation Coordination Group as advisors and Jim Earthman and Dana Roode as decision points.
To facilitate future server equipment refreshes, we want to demonstrate to Broadcom that their donation is well utilized for research and/or educational programs. We will need to require some documentation of how processors are used, whether as part of the shared cluster or through dedicated use.
We will take a highly collaborative approach to providing staff support and will house the nodes across multiple campus locations. Entities providing operating support for nodes will receive appropriate “priority access” (to be defined). The locations we will focus on first are the OIT Academic Data Center, the Computer Science Data Center, and the Physics machine room. ICS and Physical Sciences are offering space, racks, and some staff/student assistance. We may need to branch out to the other locations offered in Biological Sciences and Medical Sciences in the future, but we will initially focus on the three sites indicated.
Our goal is to create a distributed compute cluster with campus-wide access, but exactly how this will work will depend on the design our cluster technical team develops. The technical team thus far includes Joseph Farran, Harry Mangalam, Duncan Phillips, and Francisco Lopez from OIT; Hans Wunsch and Eddie Stecker from ICS; and Domingos Begalli from Physical Sciences (Francisco Lopez is the contact point for adding additional participants).
Researchers in Biological Sciences, Engineering, the School of Medicine, and elsewhere will have access to the servers in ICS and Physics, along with those housed in OIT. The precise allocation scheme, giving appropriate schools or researchers “priority access” in return for providing support assistance, must still be engineered. “Priority access” will ensure that supporting PIs or departments have access to processors when they need them, while making idle processors available to others on campus. The goal should be that at least 25% of processor time over the course of a month is made available for general use.
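As a minimal sketch of how the 25% target above might be checked, the following illustrative Python snippet computes what fraction of a node's consumed processor time over a month went to general campus use rather than to its priority group. The function name, the accounting categories, and all numbers are assumptions for illustration only; the eventual allocation scheme will be defined by the cluster technical team.

```python
# Illustrative sketch only: checking a node's monthly usage against the
# 25% general-use target. All names and numbers are hypothetical.

def general_use_share(priority_hours: float, general_hours: float) -> float:
    """Fraction of consumed processor time that went to general campus use."""
    total = priority_hours + general_hours
    if total == 0:
        return 1.0  # an idle node imposes no priority load on others
    return general_hours / total

# Example: over one month, the priority group consumed 480 processor-hours
# and other campus users consumed 240.
share = general_use_share(priority_hours=480, general_hours=240)
print(f"General-use share: {share:.0%}")  # prints "General-use share: 33%"
assert share >= 0.25  # meets the 25% target
```

In practice a batch scheduler's fair-share accounting would supply these numbers per node or per queue; the sketch only shows the arithmetic behind the target.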
The Broadcom nodes will form the beginnings of a “UCI Grid” that we can also connect to the UC Grid. Policies and parameters for doing this must be discussed with UCI researchers at the appropriate time. Another set of 400+ used servers is being offered to the northern UC campuses from the Broadcom San Jose data center; it has been suggested that these nodes be placed on the UC Grid as well.
MPC/Financial Support for OIT-Housed Nodes
We will also use donated nodes to upgrade the less capable public nodes in MPC. Researchers who have nodes in MPC that are Broadcom upgrade candidates will be given the option to upgrade them for $300 each, which will help cover the installation and infrastructure costs of the Broadcom project. PIs who upgrade older MPC nodes in this fashion will retain their existing dedicated access to them afterwards.
For $300 per node, researchers may also support the deployment of additional racks in OIT to house Broadcom servers. In this case the nodes will still be available to others on campus, but the contributing research group will receive the same “priority access” described above. It remains to be determined whether the additional racks will become part of MPC or form the basis of a new campus-wide cluster along with the servers deployed in Computer Science and Physics.
The Office of Research has been asked to provide financial assistance for this project, including split funding for a second dedicated OIT cluster administrator. However, budgets are tight and needs are many, so this may not be possible at this time. We must seek additional funding in the future to ensure adequate support for cluster computing, a critical research need. Our collaborative approach will get the nodes up and running, although we will need to make compromises along the way.
Assistant Vice Chancellor
Network and Academic Computing
Director of Client Services
Network & Academic Computing Services