Technical Staff Blog


The dog days of summer are upon us, bringing lethargy, fever, and mad dogs.  Meanwhile, the technical staff is bringing big changes to our computer systems.  Students returning in the fall will find two upgraded OSes and a new filesystem.  We are also adding more GPU power to our compute cluster and a new online discussion forum.

Debian 10 "Buster"

As of this writing, nearly all end-user Linux systems are running the latest Debian release, code-named "Buster."  Buster was released on July 6th, and so far the transition appears to be smooth.  For a quick look at what's new, Debian offers a wiki page; for all the details, see the release notes.  Perhaps the biggest change is that AppArmor, a Linux kernel security module that restricts what applications are able to do, is now turned on by default.  Buster, by the way, was the dachshund in Toy Story 2.
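If you are curious whether AppArmor is active on a given machine, the aa-status utility from the apparmor package will tell you.  This is a generic sketch; exact output varies by release, and full detail requires root:

```shell
# Check whether AppArmor is enabled (exits 0 if so).
sudo aa-status --enabled && echo "AppArmor is enabled"

# Show loaded profiles, and which are in enforce vs. complain mode.
sudo aa-status
```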

macOS 10.13 "High Sierra"

All CS-department-managed Macs are now running High Sierra, Apple's previous release.  That includes the Macs in the MLab (CIT room 167) and the copy alcove Macs.  The tstaff has now turned its attention to Mojave (10.14), Apple's current release, and to Catalina (10.15), the next release.  Apple does keep us busy.

Filesystem Transition

The CS Department's shared filesystem is a mostly invisible service (when it is working well, that is).  It provides nearly half a petabyte of high-performance, backed-up storage to students, researchers, and a wide variety of essential services.  Since 2011, these file services have been supplied by a cluster of file servers running IBM's Spectrum Scale (née GPFS), and delivered to end users via NFS and CIFS.

CIS provides similar file services, employing a Dell EMC Isilon clustered storage system.  CIS has generously offered to meet CS file service needs on this platform, and this summer we are (at last) prepared to make the transition.  Behind the scenes, this has been a major long-term project of the technical staff, as we have had to resolve a plethora of technical issues.

NFSv3 -> NFSv4

As we move to Isilon storage over NFS, we will also move from NFSv3 to NFSv4.  Two aspects of this transition will be particularly noticeable to our community:

  • Kerberos credentials are required for file access, and Kerberos credentials expire.
  • NFSv3 uses POSIX ACLs, while NFSv4 uses its own ACL model; the two are incompatible.

Users acquire Kerberos credentials when they log into our systems, either at the console or via ssh, and also when they unlock a screen saver.  So most users will have credentials for filesystem access whenever they need them.  But users who start long-running jobs, who use the compute cluster (aka the grid), or who set up cron jobs either do not have credentials when their processes start, or risk having them expire before the processes finish.  To address this problem, the technical staff has created a mostly invisible system that creates and preserves user credentials as needed.  We will make more information on this available shortly.
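The details of our credential-preservation system are still to come.  As a generic illustration of the underlying problem, one common approach at other sites is to run long-lived jobs under k5start (from the kstart package), which obtains a Kerberos ticket from a keytab and keeps it renewed while the job runs.  This sketch is not our system; the keytab path, credential cache, and script name below are hypothetical examples only:

```shell
# Hypothetical crontab entry: k5start reads the user's keytab (-f),
# derives the principal from it (-U), stores the ticket in a private
# cache (-k), re-checks/renews it every 60 minutes (-K 60), and runs
# the command after "--" with those credentials available.
0 3 * * * k5start -q -f /home/alice/.keytab -U -k /tmp/krb5cc_alice_cron -K 60 -- /home/alice/bin/nightly-backup.sh
```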

Existing files and directories with POSIX ACLs will have their ACLs converted to NFSv4 ACLs as part of the transition, and users who rely on ACLs will need to work with NFSv4 ACLs going forward.  Note that NFSv4 ACLs cannot be converted back to POSIX ACLs in an automated way, so this is a one-way transition.
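For those who manage ACLs by hand, the command-line tools differ as well: POSIX ACLs use getfacl/setfacl, while NFSv4 ACLs use nfs4_getfacl/nfs4_setfacl from the nfs4-acl-tools package.  A hypothetical comparison (the user name, domain, and file are placeholders, and the NFSv4 commands only work on an NFSv4 mount):

```shell
# POSIX ACL: grant user alice read access to report.txt.
setfacl -m u:alice:r report.txt
getfacl report.txt

# NFSv4 ACL equivalent: add an Allow ACE granting alice read access.
# The ACE format is type:flags:principal:permissions.
nfs4_setfacl -a A::alice@cs.brown.edu:R report.txt
nfs4_getfacl report.txt
```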

Plan B

Big changes like this can have unexpected issues.  Despite our best efforts, there are always "unknown unknowns."  As the semester approaches, we are prepared to revert, in whole or in part, to our old filesystem if these new file services do not meet our needs.  We ask that our user community bear with us and be prepared for a few bumps along the way.


New GPU Servers

This summer the CS Department purchased five new GPU servers for our compute cluster.  These are all general-use machines, available to all CS Department users.  Together, the servers add 36 new NVIDIA GPUs in three models.

The GTX 1080 Ti and RTX 2080 Ti GPUs have 11 GB of VRAM each, while the Titan RTX systems each offer 24 GB of VRAM.  Details of the server hardware are (or will shortly be) described on the grid resources page.


Discussion Forum

Students asked us for a better way (than email) to share tips and discuss course and system issues.  We've chosen Discourse, a modern open-source discussion-forum application, which is now up and running in-house.  More details will be available soon.