
Tuesday, January 11, 2011

Ubuntu and OpenVPN FQDN Problems.

I have been having some weird problems on my clean Ubuntu 10.10 (desktop) install whenever I connect to a remote LAN using OpenVPN. The remote LAN hosts a number of servers, all located under the domain company.local (example). Even though the tunnel comes up and I can ping the remote machine called testserver by its short name, I can NOT ping the same server through its fully qualified domain name (FQDN) testserver.company.local. Ping fails with the following error:

ping: unknown host testserver.company.local

Looking up the FQDN with nslookup works just fine, though! Strange!

It turns out that the culprit is mDNS (Multicast DNS), which by default handles all .local domains. Looking at the file /etc/nsswitch.conf I found a line that looks like this:

hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4

Apparently, names ending in '.local' are never passed on to the DNS server: if the mDNS system can't resolve them, the [NOTFOUND=return] action stops the lookup right there instead of falling through to DNS. So I went ahead and changed the line to:

hosts: files mdns4_minimal dns mdns4

I went back to the shell, ran ping testserver.company.local again, and lo and behold, it works!
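For reference, this is roughly the edit and the check, as a sketch (the sed one-liner is just a convenience; editing /etc/nsswitch.conf by hand works just as well):

sudo cp /etc/nsswitch.conf /etc/nsswitch.conf.bak
sudo sed -i 's/mdns4_minimal \[NOTFOUND=return\] dns/mdns4_minimal dns/' /etc/nsswitch.conf
getent hosts testserver.company.local

The last command goes through the normal resolver (the same path ping uses), so it is a better test than nslookup, which talks to the DNS server directly.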

Software used:
  • NetworkManager
  • NetworkManager-OpenVPN
  • OpenVPN
  • Ubuntu 10.10 Desktop

Tuesday, October 19, 2010

Ubuntu PHP, PDO and MySQL

Note to self:
In order for the PDO MySQL driver to get installed, you need to install the php5-mysql package.

sudo aptitude install php5-mysql
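A quick sanity check that the driver actually ended up available, printing the list of PDO drivers PHP knows about (the output should include mysql):

php -r 'print_r(PDO::getAvailableDrivers());'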

Friday, November 20, 2009

SWAT Without Root User

Ubuntu and other Debian-based distros typically don't have a usable root login, since it's expected that system administration tasks happen through the sudo command.

SWAT, however, by default expects you to log in as root before it lets you change any settings. A very nice discovery, after a while of googling, is that the actual user doesn't matter, as long as the user you log in as has write access to the Samba configuration file.

Adding my own normal user to a new "admin" group, and giving that group write access to /etc/samba/smb.conf, made everything work without enabling a root login for the entire system.
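Roughly the commands involved, as a sketch (the group and user names are just examples; I used a group called "admin" and my own login, and you have to log out and back in before the new group membership takes effect):

sudo groupadd smbadmin
sudo usermod -a -G smbadmin myuser
sudo chgrp smbadmin /etc/samba/smb.conf
sudo chmod g+w /etc/samba/smb.conf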

Wednesday, June 17, 2009

TeamCity Build Agent on Vista

I had some problems getting my Windows Vista PC to run as a build agent for TeamCity. According to the documentation it should be a pretty straightforward installation procedure. However, after the installation was done, my new agent never showed up on the server waiting to be authorized, as it was supposed to.

Examining the logs in the build agent's directory, I found an error message:

Unable to ping agent Andromeda. Please check firewall on agent machine

(Andromeda is the name of the Vista PC)

Well, I tried turning off the firewall altogether, without any effect.
As it turns out, the problem is that the PC has multiple IP addresses and the agent starts listening on the "wrong" one by default. You can override the IP address by specifying it during the install procedure, or later by changing the config file.
Adding this line to the config file solved my problem:

ownAddress=192.168.90.79

After a short while the build agent showed up on the server, and after authorizing it, everything worked as expected. Hooray for TeamCity!
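For reference, the config file in question is buildAgent.properties in the conf folder of the agent's install directory, and the relevant part ends up looking roughly like this (the server URL is just a placeholder, and the other values are specific to my setup):

# <agent install dir>\conf\buildAgent.properties
serverUrl=http://teamcity.example.local:8111
name=Andromeda
ownAddress=192.168.90.79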

Monday, May 11, 2009

TeamCity, Ant & DITA

Over the last few days I have been trying to get a CI server (aka build server) up and running. After trying out both CruiseControl and CruiseControl.NET I landed on TeamCity, since it is a bit simpler, has a decent web GUI, and is somewhat familiar to me from work.

You can download TeamCity and try it out for yourself since they offer a free edition that supports up to 20 build configurations and 3 build agents.

One of the tasks I wished to accomplish with my CI server was to build the online help files (HTML) as well as the user manual (PDF). Having all the source material in DITA formatted XML files makes that pretty easy from the command line using the DITA Open Toolkit full easy install distribution. Getting it to run automatically from TeamCity cost me some gray hairs, though.

So as a note-to-self here is a short checklist of what to do after TeamCity has been installed:
  1. Download the DITA Open Toolkit and unzip it (e.g. into C:\DITA-OT1.4.3).
  2. Create a new build configuration for your project, and set "Ant" as the "Build runner".
  3. "Ant Home" should be the ant folder in your DITA installation. (Ex: c:\DITA-OT1.4.3\tools\ant\)
  4. In "Additional Ant Command Line Options" input the classpath and any options for Ant or the DITA toolchain it spawns.
  5. Add DITA_DIR environment variable to the build configuration and set it to the folder you unzipped DITA to in step 1.
  6. Create a build.xml file for your targets and point TeamCity at it.
Note!
Setting the CLASSPATH environment variable had NO effect, and neither did adding -classpath to the JVM command line options. Only adding it as a -cp option on the Ant command line (step 4, see the example below) did the trick. I did not find any mention of this in either the DITA documentation or the TeamCity documentation.
It might be obvious to someone more familiar with Ant, but for me it was a big hurdle that I only discovered while (desperately) digging through the Ant plugin configuration on my TeamCity server.
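For completeness, here is roughly what the pieces looked like for me. Treat it as a sketch: the jar names and paths are from memory, and the map file and output directories are just examples. The "Additional Ant Command Line Options" ended up containing something like:

-cp %env.DITA_DIR%\lib;%env.DITA_DIR%\lib\dost.jar;%env.DITA_DIR%\lib\resolver.jar

(If TeamCity doesn't expand %env.DITA_DIR% in that field, a hard-coded path like C:\DITA-OT1.4.3\lib works just as well.) The build.xml itself is just a thin wrapper that delegates to the toolkit's own build file; args.input, transtype and output.dir are the standard DITA-OT parameters:

<project name="docs" default="html">
  <!-- Environment variables (including DITA_DIR from step 5) become available as env.* -->
  <property environment="env"/>
  <target name="html">
    <ant antfile="${env.DITA_DIR}/build.xml" dir="${env.DITA_DIR}">
      <property name="args.input" value="${basedir}/doc/manual.ditamap"/>
      <property name="transtype" value="xhtml"/>
      <property name="output.dir" value="${basedir}/out/html"/>
    </ant>
  </target>
  <target name="pdf">
    <!-- Depending on the toolkit version/plugin, the PDF transtype may be "pdf" or "pdf2" -->
    <ant antfile="${env.DITA_DIR}/build.xml" dir="${env.DITA_DIR}">
      <property name="args.input" value="${basedir}/doc/manual.ditamap"/>
      <property name="transtype" value="pdf"/>
      <property name="output.dir" value="${basedir}/out/pdf"/>
    </ant>
  </target>
</project>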

Sunday, August 3, 2008

Konsole colors in Kubuntu

Switching on colors in the KDE Konsole is unfortunately not very easy:
  • Start the Konsole application
  • Choose from the menu: Settings -> Edit Current Profile
  • Click the "Environment: Edit..." button on the General tab
  • Make sure the TERM variable is set to xterm-color, like so: TERM=xterm-color
  • Click "OK" in both the dialogs to apply changes
  • Restart the Konsole application to see the effect it had
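A quick way to verify that the change took effect, from inside the restarted Konsole:

echo $TERM

It should print xterm-color.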

Monday, November 5, 2007

Resizing a RAID1 system partition.

Scenario: I have a Linux server (running Gentoo) with 2x 80 GB drives in a software RAID1 configuration, used as the system drives to boot the server. I wanted to upgrade the drives and swap in some bigger 160 GB drives I had lying around. So, how to do it?

PS! The command syntax is for illustration only and is not necessarily correct or complete.
  1. Make sure grub is installed on both drives so the system will still boot if one drive is missing.
  2. Fail all partitions from one of the drives: mdadm --manage /dev/md3 --fail /dev/hdc3
  3. Remove the drive from the array: mdadm --manage /dev/md3 --remove /dev/hdc3
  4. Shutdown server, and replace the drive with a bigger one.
  5. After the reboot, create the same number of partitions that were on the old drive. Each partition must be at least as big as the one it is replacing.
  6. Add the new partitions to the array: mdadm --manage /dev/md3 --add /dev/hdc3
  7. Let them fester... I mean, rebuild. You can check the status with: cat /proc/mdstat
  8. Once finished rebuilding, repeat step 2-7 for the other drive.
  9. I now had an array identical to the one on the old drives, except that there is now room to resize the filesystem and make use of the extra space on the new partitions.
  10. First we need to make the array use all the available space on the underlying partitions: mdadm --grow /dev/md3 --size=max
  11. Online resizing doesn't work for mounted filesystems, and the system disks can't be unmounted while in use. So, reboot into a rescue CD. I chose the GParted live CD.
  12. Then I re-created the RAID array with the second drive missing: mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/hdc3 missing
  13. Ran fsck -n /dev/md3
  14. Removed the journaling from my ext3 filesystem (making it into an ext2 fs basically): tune2fs -O ^has_journal /dev/md3
  15. Ran e2fsck -f /dev/md3
  16. Ran resize2fs /dev/md3 and waited a fair amount of time.
  17. Once more ran fsck -n /dev/md3
  18. Then re-enabled the journaling: tune2fs -j /dev/md3
  19. Just to be sure, I rebooted into GParted once more and created the array again, just as in step 12.
  20. I then added the second drive to the array: mdadm --manage /dev/md3 --add /dev/hda3
  21. Sat back and watched: watch -n 1 'cat /proc/mdstat' for a long time while the second drive was rebuilt with the new filesystem size.
As a last step, I edited /etc/mdadm.conf on the system disk before rebooting. Not really sure if it was needed, but it didn't hurt.
Before I edited it, the file did not contain any settings. I basically just added a definition for each of the arrays I have on my system disks and listed the partitions to use for each of them, along the lines of the sketch below.
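Something like this, using the device names from the example above (only /dev/md3 is shown; in reality there was one ARRAY line per array):

DEVICE /dev/hda* /dev/hdc*
ARRAY /dev/md3 devices=/dev/hda3,/dev/hdc3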

Update:
You might wonder why I did not add both drives to the array BEFORE resizing the file system. And yes, you can indeed do that. It would probably even be faster! But: REMEMBER TO MAKE A BACKUP! With my approach I had the second drive as a backup in case something went boom on the first drive.