Spacecc Information

From CVL Wiki

Revision as of 09:46, 25 March 2016 by Rbrand7 (Talk | contribs)


The Space Compute Cluster comprises one head node server (spacecc-head) and 3 Dell blade enclosures, each holding 16 blade servers, for a total of 48 blade machines. Each node has two 6-core Xeon E5645 @ 2.40 GHz CPUs with hyperthreading (24 logical cores) and 36 GB of RAM.

The head node is a standalone Dell server that acts as the router, firewall, and cluster install and management server. It also acts as an SSH proxy: any SSH connection made to port 22 is forwarded to one of the internal cluster nodes in sequential round-robin order. The head node itself is not accessible to ECE users.
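The round-robin behavior can be illustrated with a small sketch. This is not the head node's actual implementation (the node list and selection code here are assumptions for illustration); it only shows how "sequential round-robin" distributes successive connections across the 48 blades.

```python
# Illustrative sketch of sequential round-robin node selection.
# The address list follows the enclosure/IP scheme described on this page;
# the selection logic is a hypothetical stand-in for the proxy's behavior.
from itertools import cycle

nodes = [f"10.0.1.{i}" for i in range(1, 49)]  # 48 blade addresses
_next_node = cycle(nodes)

def pick_target():
    """Return the blade that should receive the next forwarded SSH connection."""
    return next(_next_node)
```

Successive calls return 10.0.1.1, 10.0.1.2, ..., 10.0.1.48, then wrap back to 10.0.1.1.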

Spacecc nodes require an ECE account to access. If you do not have an ECE account, you can create one from the ECE accounts page by clicking "Manage ECE Accounts".

Using your ECE account, run the following command to SSH to the cluster address, where you will land on one of the cluster nodes at "random".

$ ssh -X <ECEUSER>@<spacecc-address>
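If you connect often, an SSH client config entry saves retyping the options. The entry below is a hypothetical sketch: replace <spacecc-address> with the cluster's address and <ECEUSER> with your ECE username. ForwardX11 is the config-file equivalent of the -X flag.

```
# Hypothetical ~/.ssh/config entry; <spacecc-address> and <ECEUSER>
# are placeholders to fill in with your own values.
Host spacecc
    HostName <spacecc-address>
    User <ECEUSER>
    ForwardX11 yes
```

With this in place, `ssh spacecc` behaves like the full command above.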

Once logged into a node machine, you can SSH into any of the other cluster nodes via the internal IP addresses or hostnames below:

$ ssh 10.0.1.17   or   $ ssh blade-02-01

(hostname, internal IP)

Enclosure 1: blade-01-[01-16], 10.0.1.[1-16]
Enclosure 2: blade-02-[01-16], 10.0.1.[17-32]
Enclosure 3: blade-03-[01-16], 10.0.1.[33-48]
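The hostname-to-IP scheme above is regular: for enclosure e and blade b, the last octet is (e - 1) * 16 + b. A short helper (my own illustration, not a tool provided on the cluster) makes the mapping explicit:

```python
def blade_ip(hostname):
    """Map a blade hostname like 'blade-02-01' to its internal IP,
    following the scheme above: enclosure e, blade b -> 10.0.1.[(e-1)*16 + b].
    """
    _, enclosure, blade = hostname.split("-")
    last_octet = (int(enclosure) - 1) * 16 + int(blade)
    return f"10.0.1.{last_octet}"
```

For example, blade_ip("blade-02-01") gives 10.0.1.17 and blade_ip("blade-03-16") gives 10.0.1.48.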

All spacecc nodes use the ECE central authentication and file server, so your home directory is network-mounted and follows you to each machine. The nodes also mount the ECE software share, /software, which includes all ECE packages such as MATLAB, Cadence, STK, etc.

OS X and Linux machines come with ssh built into the Terminal; Windows machines should use MobaXterm.
