User:Djgreen/Linux Administration


Prep Machine for KickStart

  • Make sure hostname has config file in /afs/bp/system/config/linux-kickstart/configs/ece
  • Make sure host is using PXE-all DHCP template in QIP.

Rename Linux Boxes

If the machine is using DHCP, I think you just need to switch it in QIP and reboot into the new lease / IP / hostname.

If the machine has a static configuration, you need to edit:

/etc/sysconfig/network-scripts/ifcfg-eth0

and

/etc/sysconfig/network

with the new values.

Try editing these files:

/etc/sysconfig/network

and

/etc/hosts

The hostname should be stored there.
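For reference, the key lines in those files look roughly like this (the hostname and addresses below are made-up placeholders, not values from our environment):

# /etc/sysconfig/network-scripts/ifcfg-eth0 (static configuration)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.0.0.42
NETMASK=255.255.255.0
GATEWAY=10.0.0.1
ONBOOT=yes

# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=newhostname.ncsu.edu

# /etc/hosts
10.0.0.42   newhostname.ncsu.edu newhostname

After editing, run hostname newhostname.ncsu.edu (or just reboot) so the running system picks up the new name.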

After your box is on the network as the new name, run this command as root:

/usr/sbin/rhnreg_ks --force --activationkey <your_key_here>

Where <your_key_here> comes from the "activationkey" line in the web kickstart file. This will create a new "object" in Red Hat Network.


Get Login Logs

In Linux, login records live in the /var/log/wtmp file, but it is a binary file format, so the recommended interface is the last command.

last -n 100 will show the last 100 logins.

You can also specify alternate wtmp files like

last -n 100 -f /var/log/wtmp.1

Controlling Access

It is possible to use pts groups to control access to Realm Linux.

cluster <cell> <PTS group>

is what goes in the config file. You can also hand edit /etc/update.conf if you don't want to re-install a box. It should look like this:

users blah XXXXXXXXXXXXXXXXXXXXXXXXXXX
root blah XXXXXXXXXXXXXXXXXXXXXXXXXXX
cluster> eos itecs-admin:helpdesk

where you replace the pts group with what group you want to use. If there is more than one pts group then you just add more cluster> lines.

Let us know if you need any further information. See also:

https://secure.linux.ncsu.edu/moin/Realm%20Linux%20Administrators%27%20Guide/Controlling%20User%20Access

Remote Reinstall of Existing RHEL box

(11:45:53 AM) gsgatlin: djgreen: You can edit /boot/grub/grub.conf and change the default boot item to re-install this workstation.

(11:47:56 AM) gsgatlin: it starts numbering at 0 so if re-install is the first item it would be default=0, then you reboot and it starts installing.
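For reference, the top of /boot/grub/grub.conf looks something like this (the entry names, kernel versions, and kickstart path below are illustrative, not copied from an actual machine):

# default=0 boots the first "title" entry below; if the re-install entry is
# listed first, default=0 triggers a re-install on the next reboot
default=0
timeout=5

title Re-install Workstation
        root (hd0,0)
        kernel /install/vmlinuz ks=<kickstart URL>
        initrd /install/initrd.img
title Red Hat Enterprise Linux Server (2.6.18-194.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-194.el5.img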

Access Local Commandline

When a Linux box is freezing during the boot cycle, you can bring up a command line as follows:

  1. At boot, press any key to bring up the kernel selection screen.
  2. Press "p", then enter the password for the root account.
  3. Select the kernel you want, then press "a" to append to the boot parameters.
  4. Add "single" (without the quotes) to the end and hit Enter.

That will boot into runlevel 1.
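For example, after pressing "a" the append line you end up editing looks something like this (the root device and options here are typical RHEL 5 defaults, not verified against our machines), with "single" tacked onto the end:

ro root=/dev/VolGroup00/LogVol00 rhgb quiet single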

Mount an NTFS drive

  • mount -t ntfs-3g <device> <mount location>
    • mounts the drive to the specified location. If connected by USB, the drive will be under /dev/sdc; connected internally it should be /dev/sda, though you'll likely have to mount a particular partition, like /dev/sdc2, etc.
    • the mount location must be an existing folder, so if you want to mount to /mnt/windows, you will have to create that windows folder under /mnt first
    • example: mount -t ntfs-3g /dev/sdc2 /mnt mounts the second partition of a USB NTFS drive under /mnt
  • dmesg | tail -##
    • gives you the last ## entries from the kernel message buffer. You can use this to figure out whether the drive is connected and where.
  • cp -vau <source> <destination>
    • for copying files off the drive. v is for verbose, a for archive, u for update. The destination must be an existing folder.
      • don't know if it will stop if it hits an error
    • example (assuming the drive is mounted to /mnt): cp -vau /mnt/bob /local/backup copies the bob folder from the drive, and anything in it, to /local/backup

Add users individually

sudo vi /etc/users.local.base

sudo vi /etc/users.local

  • add their unity username
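I haven't verified the exact format, but my understanding is that these files are just a list of Unity IDs, one per line, e.g.:

jdoe
asmith2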

Add sudo

sudo visudo

  • add as "username ALL=(ALL) ALL"
  • any sudo users must also have been added to the two users.local files as well

Local Home Dir

[10:14] <djgreen> Micah -- anything I need to know about creating local user home folders (/home/*) in Linux?

[10:15] <macolon> Umm... make sure that /home is a symlink to /local/home, aside from that, can't really think of anything.

[10:16] <djgreen> how about perms?

[10:17] <djgreen> just chown to the user and I'm ok?

[10:17] <macolon> mkdir directory; chmod 700 directory; chown username.ncsu directory

[10:17] <macolon> Assuming that no one else is supposed to access the dir.

[10:18] <macolon> if it chokes on chown username.ncsu directory -- two steps then: chown username dir; chgrp ncsu dir

[11:09] <djgreen> Micah -- is there a way for someone to have /home/userid be set as their "homedir" when they login to a machine? Rather than their regular AFS home dir?

[11:11] <macolon> setenv HOME /home/userid in .mycshrc

Repair DotFiles

[10:20] <elliot> /usr/bin/repair_dotfiles.sh will do it for linux

[10:21] <elliot> make sure they run that from the root of their home directory

Chinese Fonts

> 01157197 - any idea about adding the Chinese language pack? Is this just another "yum" install?

I'm not sure. Try this:

/usr/bin/yum install fonts-chinese

And see if it fixes his problem. That package isn't installed by default in RHEL 5, it seems. It contains Chinese TrueType fonts, which will hopefully fix the issue. Let me know if that fixes it? If so, we might want to add it to the Eos lab machines.

Administrative Scripts

> How did you go about doing that? I'd like to be able to replicate it if you're not around.

I've been using realm-crons, which have the option of being run either every 20 minutes, hour, day, or month. You can take a look at some of the scripts I've used for random stuff in:

/afs/bp/system/i386_linux3/athenan/adm.ece

A repository of scripts is in the scripts directory, while things that actually run are in one of the cron.* directories. The names are pretty self explanatory, with the exception of wsr, which is the 20 minute cron job.

To avoid killing AFS, all crons are set up to have a random wait before running, with a maximum of 20 minutes, so doing things at an EXACT instant is not generally feasible -- though with a couple of combos of scripts it could be done, it would be messy.
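I haven't dug into how the realm crons actually implement that delay, but the idea is just a random sleep at the top of the script before the real work; a minimal sketch (bash, since $RANDOM is a bashism):

#!/bin/bash
# Random delay of 0-1199 seconds (just under 20 minutes) so every client
# doesn't hit AFS at the same instant.
sleep $(( RANDOM % 1200 ))

# ... actual maintenance work goes here ...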

Nvidia and X

> P.S. Should I just use nVidia's installer, or is it packaged nicely for > RHEL 5/Realm Kit somewhere?

Normally the open source nv driver is more stable than the binary only nvidia. However, RHEL5 isn't exactly new anymore. An updated X driver would be the thing to try.

livna.org has packages for Fedora. Ah...

   http://atrpms.net/dist/el5/

has the nvidia packages for RHEL 5.

Jack


---

Step 1. Download the Drivers

The easiest way to download the latest drivers is to follow the directions on Nvidia's own driver download page. If you are in the mood to try out beta or older versions of the drivers, check out this page.

Step 2. Kill the X Server

The Nvidia installer will complain if you try to install new drivers while the X server (a.k.a. all the graphical user interface stuff) is running. So, you'll have to jump to a new session by hitting Ctrl+Alt+F1. This will bring you down to a text-only terminal. Log in if it asks you to.

Now, GNOME (which uses gdm) users will usually enter this to stop the X Server:

sudo /etc/init.d/gdm stop

As for KDE (kdm) users:

sudo /etc/init.d/kdm stop

Step 3. Start the Driver Installer

Navigate to the directory the driver installer was downloaded to. For me, this was /home/eddie/Downloads:

cd ~/Downloads

Now, you must have root permissions to install new drivers (because it ties itself in with the kernel), so make sure you either switch to the root user or use sudo (recommended) before running the installer:

sudo sh ./NVIDIA-Linux-x86_64-195.36.15-pkg2.run

NOTE: Remember to use the name of the driver file you downloaded, not the one above.

Step 4. Follow the Installer's Instructions

The installer should ask you a few questions as it installs the new drivers. It is usually safe (and recommended by myself) to say yes to all of the questions asked (install 32-bit OpenGL libraries, create a fresh Xorg.conf, etc.). After the questions, sit back and let the installer finish.

Step 5. Reboot and Enjoy

And now you are done! Reboot and enjoy the up-to-date drivers:

sudo reboot

Troubleshooting and How to Handle Errors

In this section, I will describe the methods I used to work around a few of the issues I have encountered when installing new drivers. (I will update this section whenever a new problem arises!)

1. “Provided install script failed”

If you run Ubuntu, then you will see this every time you try to install a new driver. Just ignore it; the install script provided by the Ubuntu developers fails on purpose.

2. Error locating kernel source

If you are like me and have compiled your own custom kernel, this problem will probably affect you. If you do not run a custom kernel and use the default, distribution-provided kernel, then you probably do not have the kernel headers installed. On Ubuntu, this is simple to fix:

sudo apt-get install kernel-source

But if you ARE on a custom kernel, or you have the correct kernel headers installed but it still cannot find them, append the --kernel-source-path option onto the installer command. Kernel headers are usually located in the /usr/src directory. In my case, the command I use to start the 195.36.15 driver installer is:

sudo sh ./NVIDIA-Linux-x86_64-195.36.15-pkg2.run --kernel-source-path=/usr/src/linux-headers-2.6.32-bfs311-idlesoft-desktop-amd64/


---

http://http.download.nvidia.com/solaris/1.0-8178/README/appendix-g.html

Clean up error reports

/usr/bin/run-realm-cron wsr ; /usr/bin/run-parts /etc/cron.wsr

Dan,

Best way to make it stop is to clean up the /etc/aliases file.

In RL6 there is a script that keeps the local-mail-users table updated with whatever you dump into aliases and any local accounts that are created. Normally, clients don't listen on port 25 other than from localhost. Servers, however, have a minimal sendmail configuration. It was generally assumed when I wrote the configs that servers know what they are doing and need to be able to receive the bounces they send.

Jack
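For reference, this is generic sendmail behavior rather than anything specific to our configs: entries in /etc/aliases look like "name: destination", and after editing you run newaliases to rebuild the aliases database. For example (addresses are placeholders):

# /etc/aliases
root:      sysadmin@example.edu
builduser: sysadmin@example.edu

newaliases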

Mounting Drives

http://www.tuxfiles.org/linuxhelp/fstab.html

http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-autofs.html

  • (3:42:45 PM) slack: you can set user mounts in fstab, or just the automounter
  • (3:42:50 PM) slack: s/just/use/
  • (3:45:48 PM) slack: http://www.tuxfiles.org/linuxhelp/fstab.html
  • (3:46:06 PM) djgreen: fstab, as Micah explained it to me, required that we pre-create mount points for each user ahead of time
  • (3:46:39 PM) slack: yes. For a user to be able to use mount, the mount point and end point must pre-exist.
  • (3:46:59 PM) djgreen: and they have to own it
  • (3:47:01 PM) slack: Otherwise, a user could remount /usr with their own binaries and own you
  • (3:47:15 PM) djgreen: so how does one automate that?
  • (3:48:02 PM) slack: with the automounter...which is how /ncsu works
  • (3:48:14 PM) djgreen: and since you have multiple people logging into the same box... ok.
  • (3:48:22 PM) slack: /etc/auto.master set's up /ncsu and points it an an LDAP map
  • (3:48:39 PM) djgreen: you have a link to point me at for automount doc?
  • (3:48:43 PM) slack: although you can have a static, file based map on the machine for ease of use
  • (3:48:48 PM) slack: let me see
  • (3:49:09 PM) djgreen: for celerra, I'll need a good number of diff mount points on each machine.
  • (3:49:50 PM) djgreen: but if I can make a list as you've mentioned and just push that to all, it could work.
  • (3:50:04 PM) slack: *nod*
  • (3:50:42 PM) djgreen: though I'm not sure what happens when there's 50 mount points and user only has privs to 1 or 2
  • (3:51:22 PM) slack: then they can only get to 1 or 2
  • (3:51:41 PM) djgreen: but does that cause errors/delays due to retries, etc?
  • (3:51:45 PM) slack: with the automounter you can easily set it so you can see everything under, say, /ncsu or you only see directories there after you access them
  • (3:52:28 PM) slack: if the automounter can't mount the share its been told to, yes
  • (3:53:02 PM) djgreen: not sure how well it'll work with the celerra cifs paths then...
  • (3:53:04 PM) slack: http://www.redhat.com/docs/manuals/enterprise/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-autofs.html
  • (3:53:53 PM) slack: wouldn't we all like to know a little more about the celerra and cifs and the plan for doom?
  • (3:54:18 PM) slack: if the krb auth fails it may fail gracefully, but I've nothing to test to make sure
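Based on the discussion above, a file-based automounter setup might look something like this (the map names, options, and server paths are illustrative assumptions, not our actual /ncsu or Celerra configuration):

# /etc/auto.master -- directories under /ncsu come from the map file below
/ncsu   /etc/auto.ncsu   --timeout=600

# /etc/auto.ncsu -- one line per directory that appears under /ncsu;
# accessing /ncsu/projects mounts the export on demand
projects   -rw,soft,intr   fileserver.example.edu:/export/projects
scratch    -rw,soft,intr   fileserver.example.edu:/export/scratch

After editing the maps, restart the automounter (service autofs restart on RHEL 5) so it picks up the changes.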