User:Rewood/AFS Myths


==Myth #1: With regard to the AFS Cells, Unity=ITD, Eos=Engineering, BP=Bobby Pham==

RealDeal(tm) #1: Today (May 2002), the cell names don't mean much with regard to the organizations that manage them.

Generally, today:
* BP = software installs, Unix system management
* Unity = almost all user volumes, ITD-managed "space"
* Eos = PAMS-managed "space", CSC-paid-for/ITD-managed "space", WolfWare, some long-time user volumes, legacy ITECS-managed "space", Engineering AFS lockers

BP = Bobby Pham is an NCSU urban legend. It's just a fun thing to talk about, complain about, or do whatever you want with.

==Myth #2: "ITECS manages the EOS [afs cell] AFS file servers."==

RealDeal(tm) #2: No, we don't. Not all of them, anyway. ITD manages many of them for WolfWare and other volumes. PAMS manages AFS servers in /afs/eos. There's a CSC file server currently in /afs/eos managed by ITD. ITD manages all the authentication, volume location, and backup database servers (all lumped together under the term "DB servers").

ALL our file servers in /afs/eos are named engr##f.eos.ncsu.edu.

If a vos examine doesn't show a volume to be on one of those boxes, it's not our space.
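
For example, a quick vos examine (volume name hypothetical, output abbreviated):

  % vos examine users.jdoe
  users.jdoe                        536870999 RW      48625 K  On-line
      engr01f.eos.ncsu.edu /vicepa

The server line is the second one; engr01f.eos.ncsu.edu fits the engr##f pattern, so that volume would be our space.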

==Myth #3: All Unity AFS File servers are managed by ITD.==

RealDeal(tm) #3: Nope - we have a user file server in Unity.

Our /afs/unity file server is engr00uf.eos.ncsu.edu; subsequent /afs/unity file servers will be named engr##uf.eos.ncsu.edu.

It's for Engineering admin/faculty/staff (not student) user volumes that need more than the standard 50MB quota.
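
A sketch of how that works (the user, volume, and server names here are all hypothetical): the user checks their quota with fs listquota, and we move the volume to our server and bump the quota:

  % fs listquota /afs/unity/users/j/jdoe
  Volume Name                    Quota       Used %Used   Partition
  users.jdoe                     50000      49120   98%         63%
  % vos move users.jdoe oldserver.unity.ncsu.edu /vicepa engr00uf.eos.ncsu.edu /vicepa
  % fs setquota /afs/unity/users/j/jdoe -max 200000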

==Myth #4: Squirrels are out to take over the Power infrastructure at NC State.==

RealDeal(tm) #4: This one is true. Except now they are hiring rats to do the suicide missions.


More information than you ever wanted to know:

Regarding Myth #1:

Things get a little confusing with ITECS/COE labs called "Eos" labs and ITD labs called "Unity" labs, and with some lingering separation in both terminology and practice between the groups, but erase from your mind the idea that this extends to the AFS cells.

For future reference, we are calling all IDs "UnityIDs" as opposed to "Unity/Eos IDs", because that's what they are: campus-wide login identifiers.

And "Eos" is becoming the umbrella term for all computing in ITECS/COE. Not limited to Realm or Unix, or labs, or whatever.

Regarding Myths #2 and #3:

From this point forward:

* All new Engineering user volumes are in the Unity cell with everyone else. If an Engineering user (not a student) needs more than 50MB, they get moved to our file server (servers, if necessary) and the quota gets bumped. We very likely will be gently moving, *over time*, user volumes under our purview from /afs/eos to /afs/unity.
* All Engineering AFS space allocations come under our structured "Locker model": standard volume names, standard mounts, separate mounts for server and user access, system:anyuser ACLs eradicated..., more technical details forthcoming (there's a sketch of these conventions after this list). This means we can find stuff, we know who owns things, and we can ask owners to "renew" their locker every year, a la MajorDomo2. And we aren't charging for space - yet.
* We are running most AFS locker management through scripts built on top of an in-house AFS Perl module (the "official" Perl module is missing some of the command suite, the WolfWare API modules weren't available at the time we started, and what little we have works cross-platform for the most part: Win32, Solaris, Linux). Quota updates, moves, mounts, new locker creation, permission setting, auditing, etc. are going to get wrapped in scripts, stay within spec as best we can, and get logged (see the minimal sketch after this list). This is why we are a little ornery about staying out of sections of AFS.
* All Engineering AFS space will be mounted at /afs/eos/lockers/research, eos/lockers/admin, eos/lockers/workspace, eos/lockers/people, eos/engrwww, and eos/engrservers (and any new locker types we need will end up in eos/lockers/somethingorother). EVERYTHING on our servers that's not a user volume will be migrated into those namespaces by the end of this year.
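
The Locker model bullet above boils down to conventions like these (all names are hypothetical; this is a sketch of the conventions, not our actual tooling):

  % vos create engr01f.eos.ncsu.edu /vicepa research.example -maxquota 250000
  % fs mkmount /afs/eos/lockers/research/example research.example
  % fs setacl /afs/eos/lockers/research/example system:anyuser none

Standard volume names and standard mount points are what let us map a vos listvol on our servers straight back to lockers and owners.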
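And the wrap-and-log idea, as a minimal shell sketch (the real scripts are Perl on our in-house AFS module; the script name, paths, and log location here are made up):

  #!/bin/sh
  # Sketch only: bump a locker's quota and log who changed what, when.
  LOCKER="$1"        # e.g. /afs/eos/lockers/research/example (hypothetical)
  NEWQUOTA="$2"      # new max quota, in 1K blocks
  LOG=/var/log/afs-locker-ops.log    # hypothetical log location
  fs setquota "$LOCKER" -max "$NEWQUOTA" \
    && echo "`date` setquota $LOCKER $NEWQUOTA by `whoami`" >> "$LOG"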