Sunday, February 28, 2010

Debian 12

In Linux, a string of 10 characters is used to represent the file type (1 character), owner permissions (3 characters), group permissions (3 characters) and other permissions (3 characters).
Typically a permission string looks like this:
-rwxr-xr-x

As mentioned before, the first character represents the file type. Here's a list of the file types:
- Regular file
d Directory
l Link
c Character Device (/dev/)
b Block Device (/dev/)

The fields are represented as shown:
t uuu ggg ooo

Where t is the type, u is for user (owner), g for group, and o for others. For each permission triplet (u/g/o), the first character is for read, the second is for write, and the third is for execute.

When the read bit is set, a user can read the file; when it is clear, read access is denied. When the write bit is set, a user can modify the file. When the execute bit is set, the user can execute it from the shell.

rwx can be represented in binary, using r as the MSB; 111 in binary is 7, which is the full permission for each permission triplet.

To set something like -rwxr-xr-x, we convert it into (111)(101)(101), then run:
chmod 755 test.txt
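As a quick check, we can confirm that the octal and symbolic forms agree. This is a sketch assuming GNU coreutils' stat is available (the temp file name is arbitrary):

```shell
# Create a scratch file, apply 755, and read back both permission forms.
tmp=$(mktemp -d)
touch "$tmp/test.txt"
chmod 755 "$tmp/test.txt"
mode_octal=$(stat -c '%a' "$tmp/test.txt")     # octal form
mode_symbolic=$(stat -c '%A' "$tmp/test.txt")  # symbolic form
echo "$mode_octal $mode_symbolic"              # 755 -rwxr-xr-x
```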

755 is the maximum permission (777) minus the default UMASK. The UMASK masks out permission bits from newly created files and directories. Normally, the default UMASK is 022. This way, every directory created will not be writable by the group or other users, but can be read and entered. To check the current UMASK, type:
umask

By default, files are created as 644 (666 minus the 022 UMASK), which means groups and others cannot modify them, and no one gets execute by default.
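The umask arithmetic can be demonstrated directly. A sketch (assuming GNU stat; the subshell keeps the umask change from leaking into your session): with a umask of 027, a new file gets 666 AND NOT 027 = 640.

```shell
# Create a file under a temporary umask of 027 and inspect its mode.
tmp=$(mktemp -d)
(
  umask 027        # mask group-write and all "other" bits
  touch "$tmp/newfile"
)
new_mode=$(stat -c '%a' "$tmp/newfile")
echo "$new_mode"   # 640
```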

For a directory to be accessible, you need to set its execute bit.

To change a file's permission, you must be root or the owner.

You can also change permissions using symbolic (letter) notation. For example, to make the file executable by everyone:
chmod ugo+x test.txt

To change owner of a file, use:
chown root test.txt

To recursively change ownership, use:
chown -R root ./temp/

Only the root user can change a file's owner. Non-root users can use chgrp to change a file's group (to another group they belong to).

The setuid (SUID) bit permits the execution of a file with the permissions of its owner. For example, we can change our passwords using passwd, even though the /etc/passwd file is only modifiable by root.

What is going on here is that when we run the passwd binary, it temporarily gives us root's effective permissions. What grants this is the setuid bit on the file. For example, if we have a root-owned program that lists the contents of /sbin/, we can let others run it with the owner's permissions through:
chmod u+s,o+x binaryfilename

s sets the setuid bit and x sets the execute flag. This way, the owner's permissions will be given to whoever runs the file. The setuid and setgid (g+s) bits only take effect on binary executables.
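You can see the setuid bit in a listing: it appears as an 's' in the owner's execute position. A sketch using a placeholder file (not a real binary) and GNU stat:

```shell
# Set 755 on a scratch file, then add the setuid bit (-> 4755).
tmp=$(mktemp -d)
touch "$tmp/prog"
chmod u+rwx,go+rx "$tmp/prog"   # plain 755 first
chmod u+s "$tmp/prog"           # add setuid
suid_mode=$(stat -c '%a %A' "$tmp/prog")
echo "$suid_mode"               # 4755 -rwsr-xr-x
```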

The sticky bit is an access-flag that can be applied to files and directories in *nix systems. When the sticky bit is set on a folder, files and directories in it can only be modified by their owners, the owner of the directory, or the superuser. Without the sticky bit set for the folder, anyone with modify permissions for the folder will be able to delete anything in the folder.

To set the sticky bit, use:
chmod ugo+rwxt folder

By default the /tmp directory has the sticky bit set, to prevent users from messing around with other users' files.

Links in Linux are similar to Windows shortcuts, with additional features. There are two types of links: Soft (symbolic) Links and Hard Links. A Soft Link is a pointer to a file's name. A Hard Link is a directory entry pointing to the inode representing a file. Inodes are filesystem allocation units. In other words, a Hard Link is another name for the same file, while a Soft Link is a shortcut to a file.

To do a Soft Link, use:
ln -s source target

Links can be deleted without affecting the original file on the hard disk. If you create a Soft Link without specifying a link name, it will automatically use the original file's name.

Soft Links are able to traverse file systems; Hard Links cannot. To create a Hard Link, omit the -s parameter. (File systems here refers to separate partitions and hard disks.)

To reveal the inode information during listing, use:
ls -li

An inode is roughly equivalent to Windows' allocation structures: it stores a file's metadata and the location of its data blocks.
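The inode difference between link types is easy to verify: a hard link shares the original's inode, while a soft link gets its own inode and merely stores the target's name. A sketch assuming GNU stat:

```shell
# Compare inode numbers of a file, a hard link to it, and a soft link to it.
tmp=$(mktemp -d); cd "$tmp"
echo data > original
ln original hard        # hard link: another name for the same inode
ln -s original soft     # soft link: new inode that points at the name
inode_orig=$(stat -c '%i' original)
inode_hard=$(stat -c '%i' hard)
inode_soft=$(stat -c '%i' soft)
```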

Debian 11

Debian comes with many open source system utilities that are universal across most Linux distributions. At many times, we want to find out information about the CPU, RAM and the HDD. The first utility we're going to look at is the ps utility.
ps tells us information about running processes. A common way to run ps is:
ps aux

(BSD-style options take no dash; ps -aux may warn that the syntax is ambiguous.)

The GUI equivalent of ps is:
gnome-system-monitor

From ps or gnome-system-monitor, you can see the processes associated with users. ps without parameters returns the processes spawned by a given shell. ps -ef returns all processes for all users, which is similar to -aux.

A non-privileged user can only terminate his/her own processes. Root, however, can terminate anyone's processes. To kill another user's process, a non-privileged user must first obtain root privileges (for example via su).

Another system resource utility is the top program. top shows the heaviest users of the CPU, and it refreshes automatically:
top

Top consolidates the output from free, uptime, vmstat and ps programs.

You can stop top after 3 refreshes through:
top -n 3

You can monitor specific PIDs through:
top -p 500,505,510

You can also monitor processes from specific users through:
top -u kelvin

You can combine the above parameters to monitor specific PIDs from a specific user for a set number of iterations, and so on.

To see information about mountpoints and partitions, use:
df

By default it shows sizes in "blocks" of 1024 bytes each. To see human-readable results, use:
df -h

There are two ways to manage users and groups - The GUI and the CLI. From the CLI, you can add users using:
useradd kelvin2

The user account database is stored in /etc/passwd, and the hashed passwords are stored in /etc/shadow.

A line in passwd looks like:
root:x:0:0:root:/root:/bin/bash

The first field is the login name, which is root.
The second field is a reference to /etc/shadow ('x').
The third field is the UID; IDs 0-999 are reserved for system accounts, and regular users start at 1000.
The fourth field is the GID (primary group ID), with the same reserved range.
The fifth field is the full name of the user, which in this case is root as well.
The sixth field is the $HOME directory.
The last field is the default shell, which is bash.
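The field layout can be checked mechanically with cut, splitting on ':'. A sketch using a sample line (not reading the real /etc/passwd):

```shell
# Extract the login name, UID and shell from a passwd-style line.
line='root:x:0:0:root:/root:/bin/bash'
login=$(echo "$line" | cut -d: -f1)
uid=$(echo "$line" | cut -d: -f3)
shell=$(echo "$line" | cut -d: -f7)
echo "$login $uid $shell"    # root 0 /bin/bash
```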

A line in shadow looks like this (hash abbreviated):
root:$6$...:15585:0:99999:7:::

The first field is the login name.
The second field is the hashed password.
The third field is the number of days since the Unix epoch (Jan 1, 1970) that the password was last changed.
The fourth field is the minimum number of days before the password may be changed again.
The fifth field is the maximum password age in days.
The sixth field is the number of days' warning given before the password expires.
The seventh field is the number of days after expiration before the account is disabled.
The last field is the number of days since the Unix epoch that the account was disabled.

To delete a user, use:
userdel kelvin2

To reveal package membership of a program, use:
dpkg -S /usr/sbin/useradd

To modify an existing user, use:
usermod -s /bin/bash linuxcbt2

The above example uses the -s parameter which specifies the shell. Use man usermod to check for the parameters you can use.

A GUI tool for user management is:

By default, it only shows non-system accounts (non-privileged users). Each user belongs to exactly one primary group, but can belong to multiple secondary (supplementary) groups.

To add a group, use:
groupadd engineering

To add a user into a group (appending to his existing supplementary groups), use:
usermod -aG engineering kelvin2

Note that plain -G replaces the user's supplementary group list instead of appending to it.

Similarly to userdel, to delete a group, use:
groupdel engineering

Debian 10

Perl is available on every *nix system. It is available on Windows and Mac computers as well. Perl is a scripting language that manipulates text and binary data. Perl is modular, and there are thousands of modules available (E-Mail, XML, HTML, PDF, etc.).
You can do single-line commands through:
perl -e 'single-line command separated with semicolon'

To print Hello World! with Perl:
perl -e 'print "Hello World!\n"; print "How are you today?\n"'

A Perl script should start with a shebang line pointing at the Perl interpreter:
#!/usr/bin/perl

After the shebang line, you can begin your scripting:
print "Hello World!\n";
print "How are you today?", "\n";

Notice that the comma separates items in the list passed to print (Perl's actual string concatenation operator is the dot ".").

To execute a Perl script, there are two things that can be done. We can use the Perl interpreter:
perl test.pl

Or you can set the execute permission and run it directly:
chmod u+x test.pl

Note that the shebang line is essential when executing the file directly; without it, the Bash shell would not know that it's a Perl script.

You can pass arguments (parameters) into a Perl script. Arguments are stored in the $ARGV[x] array. The first argument starts with $ARGV[0]:
print "Your first argument is $ARGV[0]"

To pass in arguments:
./test.pl Hello!

Arguments are delimited by spaces. The following example will have Hello in $ARGV[0] and World in $ARGV[1]:
./test.pl Hello World

Perl has the best support for REGEXP. To do a REGEXP match (the pattern goes between the slashes; here we match one or more digits):
if ($ARGV[0] =~ /[0-9]+/) {
 print "$ARGV[0]\n";
}

You can check for compilation errors through:
perl -c test.pl

Friday, February 26, 2010

AutoIt 2

For my example, I am going to create an auto-downloader for the Nokia Music service. To do this, I will need to send Enter once and Tab 5 times for a number of repetitions, with a delay between each repetition.
By the end of this article, you'll learn how to set focus to different windows, how to check the window title, and how to send keystrokes to an application. You'll also learn the application and syntax of timers, for-loops and if-conditions.

Begin by creating a basic skeleton:
#include <GUIConstantsEx.au3>
#include <Timers.au3>

Dim $timerCountDown, $timerWindowCheck, $timerSendKeystrokes, $countCurrent, $repetitionsCurrent

$gui=GUICreate("Nokia Music Automator",200,85)


GUICtrlCreateLabel("Delay (ms):",5,30,55)




While 1

Func guiClose()

Func helpClicked()
MsgBox(0,"Help","Instructions:"&@CRLF&@CRLF&"1) Start Nokia Ovi Player"&@CRLF&"2) Go to the Download Page"&@CRLF&"3) Set the Necessary Parameters"&@CRLF&"4) Click on Start"&@CRLF&"5) Highlight the Download button"&@CRLF&@CRLF&"Note: To highlight the Download button - Hold your left mouse button on it, move the pointer away, then release. You should see a dotted box around it."&@CRLF&@CRLF&"By Syraxius (http://basicdraft.blogspot.com/)")

Func startClicked()

Func start()

Func stop()

Func timerCountDown($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime

Func timerWindowCheck($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime

Func timerSendKeystrokes($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime

Notice that all the timer functions take 4 parameters and use the #forceref directive. This is required for any function used as a timer callback.

When start is clicked, window should be focused to Nokia Ovi Player and the timerCountDown should be started:
WinActivate("Nokia Ovi Player")

In the timerCountDown function, it should increment $countCurrent by 1 each time, and make a beep:
If ($countCurrent<5) Then

Notice that I set the timer again. This is because the timer set in the startClicked handler has its interval set to 0, so that the callback fires immediately; each subsequent call is scheduled 1 s later.

Once the callback has run 5 times, all timers are killed in stop(), and then the automation is started with start().

In stop(), simply destroy all timers in $gui:

In start(), set $repetitionsCurrent to 0 and start timerWindowCheck and timerSendKeystrokes:

In timerWindowCheck(), check for the active window and if it's not "Nokia Ovi Player", stop all processes and display a message:
If Not (WinGetTitle("[active]")="Nokia Ovi Player") Then
MsgBox(16,"Halted","The downloading process has been halted because Nokia Ovi Player lost focus!")

Notice that I used 16 for the flag. This is to make it appear as an error. Refer to here for flag codes.

In timerSendKeystrokes(), check if $repetitionsCurrent is larger than the value in $repetitions. If it's not, continue with the automation. If it is, stop all timers and display a message:
_Timer_SetTimer($gui,GUICtrlRead($delay),"timerSendKeystrokes", $timerSendKeystrokes)
If ($repetitionsCurrent<GUICtrlRead($repetitions)) Then
MsgBox(64,"Completed","The downloading process has completed!")

You should end up with this code:
#include <GUIConstantsEx.au3>
#include <Timers.au3>

Dim $timerCountDown, $timerWindowCheck, $timerSendKeystrokes, $countCurrent, $repetitionsCurrent

$gui=GUICreate("Nokia Music Automator",200,85)


GUICtrlCreateLabel("Delay (ms):",5,30,55)




While 1

Func guiClose()

Func helpClicked()
MsgBox(0,"Help","Instructions:"&@CRLF&@CRLF&"1) Start Nokia Ovi Player"&@CRLF&"2) Go to the Download Page"&@CRLF&"3) Set the Necessary Parameters"&@CRLF&"4) Click on Start"&@CRLF&"5) Highlight the Download button"&@CRLF&@CRLF&"Note: To highlight the Download button - Hold your left mouse button on it, move the pointer away, then release. You should see a dotted box around it."&@CRLF&@CRLF&"By Syraxius (http://basicdraft.blogspot.com/)")

Func startClicked()
WinActivate("Nokia Ovi Player")

Func start()

Func stop()

Func timerCountDown($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime
If ($countCurrent<5) Then

Func timerWindowCheck($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime
If Not (WinGetTitle("[active]")="Nokia Ovi Player") Then
MsgBox(16,"Halted","The downloading process has been halted because Nokia Ovi Player lost focus!")

Func timerSendKeystrokes($hWnd, $Msg, $iIDTimer, $dwTime)
#forceref $hWnd, $Msg, $iIDTimer, $dwTime
_Timer_SetTimer($gui,GUICtrlRead($delay),"timerSendKeystrokes", $timerSendKeystrokes)
If ($repetitionsCurrent<GUICtrlRead($repetitions)) Then
MsgBox(64,"Completed","The downloading process has completed!")

AutoIt 1

There are many times when you need to do automation. Tasks like farming in games, or downloading large batches of files, are repetitive and boring. Thus comes the need for automation.
We will look at AutoIt today. AutoIt is a scripting engine that features a language similar to BASIC. AutoIt allows the creation of GUI-driven interfaces. You can run scripts on the fly on systems with AutoIt installed, or compile them into executables (*.exe) to run elsewhere.

We will begin by creating a basic script that shows a message box. AutoIt syntax is very similar to BASIC, and the logic is as simple as C. To show a message box, create a blank script and type:
MsgBox(0,"Hello World!","How are you today?")

There is no need for a semicolon to terminate statements. In AutoIt, a semicolon starts a comment (similar to // in C).

If you see a pop-up, you've successfully created your first AutoIt program. We'll begin constructing the GUI. The GUI is very simple, and the buttons are event driven. We'll deal with the events later on.

Before we deal with the advanced stuff, let's create a basic GUI that has a label, a textfield, and a button. Delete the message box first. We then import the GUI library:
#include <GUIConstantsEx.au3>

Next, we create the main panel:
GUICreate("GUI Example",200,100)

We then populate the GUI with buttons and a field:
GUICtrlCreateLabel("User Input:",5,5,80)
$input = GUICtrlCreateInput("",100,5,95)

$button = GUICtrlCreateButton("Read",5,30,190)

Notice that I assigned the creation of the input and the button to their respective variables. This is so that we can reference them later on.

Then we'll make the GUI visible:
GUISetState(@SW_SHOW)

If you try running this right now, you will see nothing, because once execution reaches the bottom, the program exits. We want to pause for a while to take a look at our preliminary GUI design, so we'll add a delay:
Sleep(5000)

The whole code will look like this:
#include <GUIConstantsEx.au3>

GUICreate("GUI Example",200,60)

GUICtrlCreateLabel("User Input:",5,5,80)
$input = GUICtrlCreateInput("",100,5,95)

$button = GUICtrlCreateButton("Read",5,30,190)

GUISetState(@SW_SHOW)
Sleep(5000)

Right now our window will appear for 5 seconds then disappear. Looks pretty good right now, but the buttons do nothing at all! Right now, we'll start adding in the events.

We'll need to enable event mode so our handlers get called:
Opt("GUIOnEventMode", 1)

This is typically put on the top of the program right after the include.

Next, we'll create a function to exit. This will be our event handler:
Func guiClose()
 Exit
EndFunc

We then assign this to the GUI's event listener:
GUISetOnEvent($GUI_EVENT_CLOSE, "guiClose")

I typically put this right below the GUI creation line. $GUI_EVENT_CLOSE is the event triggered when you press the close button. Right now the program exits when you click the close button, but it still closes after 5 seconds. We fix this by replacing the Sleep(5000) with an endless loop:
While (1)
 Sleep(100)
WEnd

Your code should look like this now:
#include <GUIConstantsEx.au3>
Opt("GUIOnEventMode", 1)

GUICreate("GUI Example",200,60)
GUISetOnEvent($GUI_EVENT_CLOSE, "guiClose")

GUICtrlCreateLabel("User Input:",5,5,80)
$input = GUICtrlCreateInput("",100,5,95)

$button = GUICtrlCreateButton("Read",5,30,190)

GUISetState(@SW_SHOW)

While (1)
 Sleep(100)
WEnd

Func guiClose()
 Exit
EndFunc

Right now the close button works, but not the Read button. What do we do now? Of course, we create an event handler for the read button as well. I'll do an empty one first:
Func readClicked()
EndFunc

Then we assign it to the button's listener (type this right below the button creation):
GUICtrlSetOnEvent($button, "readClicked")

You can also assign it to the input's handler so that when the user hits ENTER in the field, the same handler is called:
GUICtrlSetOnEvent($input, "readClicked")

Right now the button calls the handler, but it's empty. We'll start filling in the handler. Remember the message box command we used before? We'll use it here:
MsgBox(0,"Read","User Input is """&GUICtrlRead($input)&"""")

Let's break down what's written there. To print a double quote inside an AutoIt string, type two in a row: "". The value of $input is read with GUICtrlRead and concatenated into the sentence with &, and a final double quote is concatenated at the end of the line.

The final completed code should look like:
#include <GUIConstantsEx.au3>
Opt("GUIOnEventMode", 1)

GUICreate("GUI Example",200,60)
GUISetOnEvent($GUI_EVENT_CLOSE, "guiClose")

GUICtrlCreateLabel("User Input:",5,5,80)
$input = GUICtrlCreateInput("",100,5,95)
GUICtrlSetOnEvent($input, "readClicked")

$button = GUICtrlCreateButton("Read",5,30,190)
GUICtrlSetOnEvent($button, "readClicked")

GUISetState(@SW_SHOW)

While (1)
 Sleep(100)
WEnd

Func guiClose()
 Exit
EndFunc

Func readClicked()
 MsgBox(0,"Read","User Input is """&GUICtrlRead($input)&"""")
EndFunc

When you run the program, you should be able to see exactly what you typed into the user input field when you press Read.

Thursday, February 25, 2010

Debian 9

GREP is for filtering. AWK can do filtering and can single out columns. SED is for search and replace. SED is known as a stream editor; it operates on data received via STDIN ("<"), a pipe, or a file. SED is used for transforming text, and is primarily used in search-and-replace applications, although it has other functionality. Its pattern matching is similar to GREP and AWK, and it works with POSIX and extended REGEXP. SED is pattern/procedure based.
The general syntax of SED is:
sed [options] 'instruction' file

When used through pipe, you don't need to specify the file. sed can be run interactively or non-interactively (scripted). Suppose we are using the same file again:
Debian Linux
SUSE Linux
SUSE Linux 9999
Debian9 Linux
Linux SUSE 10

To print the first line from a text file:
sed -n '1p' test.txt

1p is an instruction which means print the first line. To print the last line of a file, use:
sed -n '$p' test.txt
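These two instructions can be sketched against an inline copy of the sample file (assuming GNU sed, as on Debian):

```shell
# Print the first and the last line of a small test file with sed.
tmp=$(mktemp -d)
printf 'Debian Linux\nSUSE Linux\nSUSE Linux 9999\n' > "$tmp/test.txt"
first=$(sed -n '1p' "$tmp/test.txt")
last=$(sed -n '$p' "$tmp/test.txt")
echo "$first / $last"    # Debian Linux / SUSE Linux 9999
```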

To automatically pass the most recently used parameters into a certain program, use:

This works for every program.

Use -n to suppress sed's default printing of every line it parses, and use 'p' to print the affected lines.

You can also use sed to print a range of lines like this:
sed -n '4,$p' test.txt

This would print from line 4 to the last line. You can also use REGEXP in sed, like this:
sed -n -e '/SUSE/p' test.txt

In REGEXP, . matches any single character. If you want to match the last line (Linux SUSE 10), you can use quantifiers like:
sed -n -e '/.* SUSE [0-9]\+/p' test.txt

You can also print a range between two patterns, like:
sed -n -e '/^SUSE/,/^Debian/p' test.txt

Sed is able to delete lines from the input stream on output. Sed does not overwrite source files unless '-i' is used ('-i.bak' keeps a backup). Sed sends all output to STDOUT unless redirection is used. To delete lines from the input stream on output:
sed -e '/SUSE/Id' test.txt > test2.txt

"I" is used to specify case insensitivity, and "d" is for delete. You can also use sed to remove ranges of lines:
sed '1,3d' test.txt > test2.txt

You can also use sed to remove blank lines:
sed -e '/^$/d' test.txt > test2.txt

Sed can also be used to replace text. We will now replace blank lines with the words "Blank Line". The syntax is 's/find/replace/'. We can do this by:
sed -e 's/^$/Blank Line/' test.txt > test2.txt

To search for SUSE and replace it with Novell SUSE:
sed -e 's/SUSE/Novell &/I' test.txt > test2.txt

& stands for the matched search term. By default, only the first match per line is replaced. To make multiple replacements per line, we need to use the global option:
sed -e 's/SUSE/Novell &/Ig' test.txt > test2.txt

"g" parameter will allow global line replacements.

Debian 8

Awk is a field/column processor that can produce reports. Awk tokenizes fields/columns based on a defined delimiter (whitespace by default). Awk performs procedures on lines based on optional matched patterns, and automatically loops through the lines of input. Awk supports the same input sources as GREP, supports different input and output delimiters, and works with POSIX and extended REGEXP.
Awk consists of three steps:
BEGIN block (Optional) - executes prior to reading input stream
MAIN block - executes once per line of input (// Pattern matching is also part of this)
END block - executes once after reading input stream

We will now look at awk using a similar file used in GREP:
Debian Linux
SUSE Linux
SUSE Linux 9999
Debian9 Linux

To print full lines using awk:
awk '{print}' test.txt
cat test.txt | awk '{print}'

If you have a line like this:
SUSE Linux 9999

SUSE is referenced by $1, Linux is $2, and 9999 is $3.

To print only column 1 for the whole file:
awk '{print $1}' test.txt

If you print $2, lines that don't have a second column will be printed as blank lines.

You can print more columns using the syntax:
awk '{print $1,$2}' test.txt

You can also swap the lines if you wish:
awk '{print $2,$1}' test.txt

To search for specific patterns:
awk '/SUSE/{print}' test.txt

Similarly, you can print specific columns:
awk '/SUSE/{print $1}' test.txt

If you want to use different input (FS) and output (OFS) delimiters, such as whitespace for input and colon ":" for output, use:
awk 'BEGIN{OFS=":"}{print $1,$2}' test.txt

FS stands for field separator, OFS for output field separator. Note that OFS is only inserted between comma-separated print arguments; a bare {print} outputs the record unchanged.
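A one-line sketch of OFS in action (assuming gawk/mawk as shipped with Debian):

```shell
# OFS=":" is inserted between the comma-separated fields on output.
out=$(echo 'SUSE Linux 9999' | awk 'BEGIN{OFS=":"}{print $1,$2,$3}')
echo "$out"    # SUSE:Linux:9999
```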

To use pattern matching together with different delimiters:
awk 'BEGIN{OFS=":"}/SUSE/{print}' test.txt

An end block is typically used to show that a process has been completed:
awk 'BEGIN{OFS=":"}/SUSE/{print}END{print "Process Complete"}' test.txt

Awk also has variables which you can print like this:
awk 'BEGIN{OFS=":"; print "Output Field Separator is \""OFS"\""}/SUSE/{print}END{print "Parsed "NR" lines"}' test.txt

We will now parse /var/log/messages for a real-life example. A line in messages look like this:
Feb 22 22:32:29 kelvin-debian01 kernel: [ 24.826668] NFSD: starting 90-second grace period

We want to show only the status, which is "NFSD: starting 90-second grace period". To do this, we first have to find out which column NFSD is in. It is the 8th column, so we want to print from the 8th column onwards. This is the code to do it (run from a shell script, where $1 is the log file name):
awk '/Feb 22/{
 for (i=8;i<=NF;i++)
  printf("%s ",$i);
 printf("\n");
}' $1

NF contains the number of columns in the current line. We want to print from the 8th column to the end of the line, so we loop from 8 to NF. After printing each line's columns, we emit a newline. This effectively returns only the status messages with no other information.
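The loop can be sketched against the single sample log line shown above (the trailing space comes from the "%s " format):

```shell
# Extract columns 8..NF from a messages-style line with awk.
logline='Feb 22 22:32:29 kelvin-debian01 kernel: [ 24.826668] NFSD: starting 90-second grace period'
status=$(echo "$logline" | awk '/Feb 22/{
  for (i=8;i<=NF;i++)
    printf("%s ",$i);
  printf("\n");
}')
echo "$status"
```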

Wednesday, February 24, 2010

Debian 7

GREP is a Line Processor. Its basic use is to parse lines and return the full lines that contain matching text. GREP can parse through streams of data, including logs and tags. GREP supports REGEXP (POSIX and EGREP).
GREP is an executable that can be run by any user in the system unless otherwise configured by an administrator. GREP supports streams of data from STDIN ("<"), File (one or more) and Pipe.

An open file in a text editor is called a buffer. Suppose we have a text file, test.txt, with the following content:
Debian Linux
SUSE Linux
SUSE Linux 9999
Debian9 Linux

Right now, we parse it for specific lines. For example, to find the word "Debian" (case-sensitive by default), use:
grep "Debian" test.txt

We will only get the lines that contain "Debian" with exactly that capitalization, since the search is case sensitive. To turn off case-sensitivity, use the -i parameter.

To do an "exclude" search, use the -v parameter. Combining it with -i (resulting in -vi), GREP will return only the SUSE lines.

GREP can also perform numeric searches. The following command will search for "9999" in a line regardless of its position:
grep "9999" test.txt

If we want to match both Debian9 and SUSE Linux 9999, we need to search for a character class. This is a REGEXP convention; character classes are specified using square brackets. For example, to match any line containing a digit 0 through 9:
grep "[0-9]" test.txt

We can use REGEXP to search for lines beginning and ending with certain characters. Use the caret (^) to anchor to the beginning of the line, and the dollar sign ($) to anchor to the end of the line. For example, to match a line consisting of exactly "Debian" (in any case):
grep -i "^Debian$" test.txt

No line in our sample file is just "Debian", so this returns nothing; the anchors become useful once combined with character classes.

Character classes can be inserted into the middle of the search, like:
grep -i "^Debian[0-9]$" test.txt

This will only return Debian9.

REGEXP also supports quantifiers. For example, * matches 0 or more of the preceding element, ? matches 0 or 1, and + matches 1 or more. When using these extended quantifiers, we need to invoke the egrep command (grep -E). To match "Debian" followed by one or more digits, use:
egrep -i "^Debian[0-9]+$" test.txt
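A sketch of that anchored, quantified pattern against a made-up file (the file contents here are illustrative, not the sample used above):

```shell
# The + quantifier requires at least one digit, so the bare "Debian"
# line does not match; -i makes the match case-insensitive.
tmp=$(mktemp -d)
printf 'Debian\nDebian9\nDebian2010\ndebian7\n' > "$tmp/test.txt"
count=$(grep -Eic '^debian[0-9]+$' "$tmp/test.txt")
echo "$count"    # 3
```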

We will now try to parse real log files from /var/log. We will be using the messages file. messages is owned by root, so we will need to use the "su" command:
su

If you use su (switch user) without typing any account name, it goes into root by default.

If we want to search for the word "ERROR" in messages, type:
grep -i "ERROR" messages

If we want to see how many lines were returned, we can pipe it into wc:
grep -i "ERROR" messages | wc

Another way we want to use GREP is to pipe the output into tail. This shows the last 5 logged lines with "ERROR":
grep -i "ERROR" messages | tail -n 5
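Both pipelines can be sketched against a small fabricated log (the file name and its lines are made up for the demo):

```shell
# Count case-insensitive ERROR lines, then grab the most recent one.
tmp=$(mktemp -d)
printf 'boot ok\nERROR disk\nerror net\nall fine\nError cpu\n' > "$tmp/messages"
total=$(grep -i "ERROR" "$tmp/messages" | wc -l)
recent=$(grep -i "ERROR" "$tmp/messages" | tail -n 1)
echo "$total"     # 3
echo "$recent"    # Error cpu
```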

Debian 6

Tar, Gzip, Bzip2 and Zip provide compression and the ability to roll files and directories into one file. Gzip handles the compression and decompression of Gzip (*.gz) files.
To compress a file, use the following syntax:
gzip -c test.txt > test.gz

By default, if we use -c, gzip writes the compressed data to the screen (STDOUT), so you need output redirection as shown to send it to a file. If you accidentally miss out the output redirection, the binary output will likely corrupt your prompt. To fix a corrupted terminal, type:
reset

Gzip has a built-in function to reveal compression statistics. To do this, use the -l parameter:
gzip -l test.gz

Right now we have a test.gz file. We want to unzip this file, so we use the Gunzip utility:
gunzip test.gz

When you gunzip something, it removes the compressed file by default.

Another utility that decompresses gzip files is zcat which prints to STDOUT:
zcat test.gz

You can also pass the -d parameter into gzip, which is similar to gunzip.
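A quick round-trip sketch tying the gzip/zcat pieces together:

```shell
# Compress with -c and redirection, then confirm zcat restores the text.
tmp=$(mktemp -d)
echo 'hello gzip' > "$tmp/test.txt"
gzip -c "$tmp/test.txt" > "$tmp/test.gz"
restored=$(zcat "$tmp/test.gz")
echo "$restored"    # hello gzip
```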

Bzip2 is used to handle Bzip2 (*.bz2) files. Bzip2 compresses better than Gzip for larger files. Usage of Bzip2 is very similar to Gzip (same parameters, etc). Bzip2, however, cannot show statistics about a file.

You can use the "file" command to show a file's type:
file test.txt.bz2

The zcat equivalent in Bzip2 is bzcat. The gunzip equivalent is bunzip2.

Less has a built-in decompressor so it can read compressed files in plain text.

When you see .tar.gz, it means that it's a tape archive compressed with gzip.

One of the oldest compression programs for Linux is the zip and unzip. The syntax for zip is:
zip target source

For unzip, it's:
unzip source

unzip supports the -l option to list an archive's contents and statistics. (zcat does not generally read .zip archives; use unzip -p to stream a member to STDOUT.)

Tar (which stands for Tape ARchive) allows you to roll one or more files and/or directories into one file. Tar archives can be compressed or uncompressed. To create a Tar file, use:
tar -cvf output input [input2] [input3] ...

To display the contents of the tar archive:
tar -tvf file.tar

Tar can use the Gzip algorithm through:
tar -czvf output input [input2] [input3] ...

Typically the file has the extension .tar.gz. Swapping the z for a j selects Bzip2 instead (.tar.bz2). To extract an archive, use:
tar -xjvf archive.tar.bz2
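A create/list/extract round trip can be sketched as follows (gzip variant; the directory and file names are placeholders):

```shell
# Roll a directory into a .tar.gz, list it, and extract it elsewhere.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/data"
echo payload > "$src/data/file.txt"
tar -czf "$dst/data.tar.gz" -C "$src" data    # create (-C sets the base dir)
listing=$(tar -tzf "$dst/data.tar.gz")        # list contents
tar -xzf "$dst/data.tar.gz" -C "$dst"         # extract
content=$(cat "$dst/data/file.txt")
echo "$content"    # payload
```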

Debian 5

Basic Linux commands are typically universal over all Linux distributions. In GNOME's terminal, you can use CTRL+T to open a new tab. The default shell in Debian and most other Linux distributions is the BASH shell.
The BASH shell has 2 prompts: $ for non-privileged (non-root), # for privileged (root). An example of a non-privileged prompt is:
kelvin@kelvin-debian01:~$

To find out who you are for a given shell, use the command:
whoami

tty stands for Teletype. To find out which terminal you are currently logged on to, type:
tty

When using GNOME, a pseudo terminal is created automatically. An example pseudo terminal will look like /dev/pts/0. The number increments by 1 for each new terminal.

To reveal the currently connected users, their ttys and programs, use:
w

The w command also tells us what's last run in a particular tty.

A command similar to w but a little simpler is:
who

To find out where we are in the directory tree relative to '/', use:
pwd

pwd prints the working directory. The '/' is known as the absolute root. pwd is actually retrieved from an environment variable '$PWD'. '$PWD' is updated as we navigate throughout the directory tree. BASH updates the '$PWD' variable. There is also another variable maintained called '$OLDPWD' which stores where we were before we were in '$PWD'.

To change directories, use:
cd

cd can accept relative paths or absolute paths. Relative paths are paths relative to the current folder. Absolute paths begin with '/' and is the absolute path with respect to the absolute root. Absolute path behaves the same no matter where you are in the directory tree, while relative depends on where you are. Executing cd with no parameters or cd ~ takes you back to the home directory.

To create a file or update a file's timestamp information, use:
touch

You can use "touch test.txt" to create a blank file, or touch followed by an existing filename to update its timestamp. Zero-byte files are required by some scripting applications.

The equivalent of dir in Linux is:
ls

The parameter -l does LONG listing, which includes permissions, links, owner, size, modified timestamp, and file/directory name in the output. The -a option lists files that begin with a ".". Files that begin with "." are hidden files. You can also do sorting with ls. For example, the -ltr would sort by modified time. -t means to sort by time, and -r is to reverse the order.

To find files in the current directory with ".txt" in the filename, use:
find ./ -name '*.txt'

Find also finds recursively into the folder's sub-directories.
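The recursive behaviour is easy to demonstrate (a sketch; file names are arbitrary):

```shell
# find descends into sub-directories and matches the -name glob.
tmp=$(mktemp -d); cd "$tmp"
mkdir sub
touch a.txt sub/b.txt notes.md
found=$(find ./ -name '*.txt' | sort)
echo "$found"
```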

echo prints text onto the console. It can also allow you to do output redirection into a file like this:
echo "Hello world!" > helloWorld.txt

With echo, you can also print the contents of a variable. For example, $? stores the exit status of the most recent command. To see its contents:
echo $?

A 0 return typically means that an operation has succeeded.
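A minimal sketch of reading $? after a succeeding and a failing command:

```shell
# true always succeeds (exit 0); false always fails (exit 1).
true
ok=$?
false
bad=$?
echo "$ok $bad"    # 0 1
```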

cat is for concatenation. It concatenates the contents of text files. If a single file name is specified, it will print it on the screen:
cat test.txt

However, we can also use cat to print multiple files. We can also do output redirection into a new file, like this:
cat test.txt test2.txt > test3.txt

touch is used to create blank files. To create directories, use:
mkdir temp

To delete directories, use:
rmdir temp

Directories must be empty before they can be removed.

To display all defined aliases, use:
alias

The same command can also be used to create your own aliases:
alias dir='ls -l'

Now, when you use dir, it will execute the LONG listing. Aliases are only available for the current tty or pts you're in.
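A minimal round-trip with the alias builtin (note: non-interactive bash scripts disable alias expansion unless you first run `shopt -s expand_aliases`):

```shell
alias ll='ls -l'    # define an alias
alias               # list all defined aliases, including ll
unalias ll          # remove it again
```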

To see a list of commands you've used, run:
history

By default, it stores the past 1000 commands. The history is stored sequentially in .bash_history in the home directory.

If you use history, you can see the command numbers. To run the 50th command, we can use:
!50

The ! character is called "bang". (The #! sequence at the top of a script is the related "shebang".)
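A few common history-expansion forms; these work at an interactive prompt, since scripts keep history expansion off unless you enable it with `set -o history -o histexpand`:

```shell
history          # numbered list of past commands
# At an interactive prompt:
#   !50          # re-run command number 50
#   !!           # re-run the previous command
#   !ls          # re-run the most recent command starting with "ls"
```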

At times a user may want to remove all program output from the screen. We can do this using:
clear

To remove files, use:
rm test.txt

rm can also remove directories. Directories in Linux are considered special files. You can recursively remove a directory and all its contents using the "-rf" parameters.
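The difference between rmdir and rm -rf can be demonstrated safely in a throwaway directory:

```shell
cd "$(mktemp -d)"                # scratch area for the demo
mkdir -p demo/sub
touch demo/sub/file.txt
rmdir demo 2>/dev/null || echo "rmdir refused: demo is not empty"
rm -rf demo                      # removes the directory and everything in it
```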

To copy a file from source to destination, use:
cp source destination

cp by default has no output. To make it output its operations, use the "-v" parameter which stands for verbose.

To display a file's path in the $PATH variable (if it exists), use:
which filename

filename can be replaced with a program name. For example, if we run "which ls", we will see "/bin/ls".

When ls is run in auto-color mode, directories will be blue and applications will be green.

To move or rename an item, use:
mv source destination

To rename test.txt to testing.txt, you can use "mv test.txt testing.txt". Moving or renaming a file will not change its timestamp unless you cross file systems.
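The timestamp claim can be checked with GNU stat (a sketch in a throwaway directory):

```shell
cd "$(mktemp -d)"
touch test.txt
stat -c %Y test.txt      # modification time before the move (GNU stat)
mv test.txt testing.txt  # a rename within the same file system
stat -c %Y testing.txt   # same value: only the name changed
```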

BASH supports command-chaining through the ; delimiter. An example of command-chaining is:
ls -l; mv -v test.txt testing.txt; echo "Exit Status: $?"

BASH also supports logical AND and OR. An example application is:
clear && ls -l && mv -v test.txt testing.txt && echo "Exit Status: $?"

The above means that the clear must be successful before the ls -l will be executed, and the ls -l must be successful before the mv will be executed.

Logical OR in BASH behaves differently from the OR in most programming languages: it runs through the list until a command succeeds. For example:
echo Fish || echo Toilet

This will only show "Fish". However, if we change the first command to "echotoot Fish", echotoot will fail, and "Toilet" will be shown instead.

If a command's output is too long, pipe it to more or less, like this:
ls -l /var/log | more

This process is known as pagination. "less" is newer than more in that it provides the ability to scroll back up. In "less", use "f" or SPACE to go forward, or "b" to go back. Use "q" in more or less to quit. Piping sends the output of one process to the input stream of another. You can do multiple piping.
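Multiple piping simply chains stages: each process's stdout feeds the next process's stdin. A small example (the grep/wc chain counts how many of the numbers 1-100 contain a "5"):

```shell
ls -l /var/log | more        # paginate a long listing
seq 1 100 | grep 5 | wc -l   # prints 19: numbers 1-100 containing a "5"
```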

Output redirection is invoked using the ">" sign, which redirects the output of a process into a file. To append instead, use ">>". It adds on to an existing file, or creates a new file if the file does not exist.
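The two operators side by side (a sketch in a scratch directory):

```shell
cd "$(mktemp -d)"
echo "first line"  >  log.txt   # ">" creates or truncates the file
echo "second line" >> log.txt   # ">>" appends to it
cat log.txt                     # shows both lines
```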

If you only want to see the top or the bottom of a file, you can use "head" and "tail" commands like this:
head -n 5 test.txt

The above command returns the top 5 lines of a file. Change head to tail and it returns the last 5 lines. If -n is not specified, both default to 10 lines.
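head and tail can also be piped together to slice lines out of the middle of a file:

```shell
cd "$(mktemp -d)"
seq 1 100 > nums.txt
head -n 5 nums.txt               # lines 1-5
tail -n 5 nums.txt               # lines 96-100
head -n 20 nums.txt | tail -n 5  # lines 16-20
```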

To identify a file's type (add the -i option for the MIME type), use:
file test.txt

To do a word count or line count from a text file, use:
wc -l test.txt

If the -l is not passed in, it returns the number of lines, words, and bytes respectively.
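The three counters on a tiny sample file:

```shell
cd "$(mktemp -d)"
printf 'hello world\nbye\n' > t.txt
wc t.txt      # prints: 2  3 16 t.txt  (lines, words, bytes)
wc -l t.txt   # lines only
wc -w t.txt   # words only
wc -c t.txt   # bytes only
```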

To quickly generate a sequence of numbers, use:
seq 1 10
seq 10

Both forms return the numbers 1 through 10, one per line.

seq is typically used to drive BASH loops.
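A typical seq-driven for-loop:

```shell
# seq generates the list, and the for-loop iterates over it
for i in $(seq 1 3); do
    echo "iteration $i"    # prints iteration 1, 2, 3
done
```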

Outputting to STDOUT (the console screen) is typically much slower than outputting to a text file.

To check the disk utilization throughout a directory structure, use:
du -ch

The -c flag appends a grand total, and -h prints human-readable sizes (K, M, G); without -h the sizes are raw 1K block counts.
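A sketch of the du flags in a throwaway directory (-s is another common option, not mentioned above):

```shell
d="$(mktemp -d)"          # scratch tree for the demo
mkdir "$d/sub"
printf 'x' > "$d/sub/f"
du -ch "$d"               # per-directory sizes plus a final "total" line
du -sh "$d"               # -s: a single summary line for the whole tree
```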

Monday, February 22, 2010

Debian 4

GNOME is the default graphical user interface supplied with Debian. Similar to Windows, we get a Desktop after logging in. GNOME provides multiple desktops, which can be accessed from the lower right-hand corner of the lower panel. The icons in the multiple-desktop switcher update themselves according to how each desktop really looks.
Speaking of lower panel: We have two panels, the upper and lower panels. The upper panel is mostly for shortcuts and other menus, while the lower panel is typically used for window tabs. Panels are not fixed: You can rearrange panels by right-clicking them and clicking properties. In GNOME, you can do transparency effects.

You can also add new applications to the panel by right-clicking it and selecting "Add to panel". Things like dictionaries, clocks, battery charge and network monitors can be added to panels. You can update Debian through the system panel on the top-right corner. The update application allows you to automatically update the kernel. There is also a small icon on the top-right corner which allows you to select windows through a pop-up menu.

GNOME automatically mounts any media inserted. Mounted devices can be accessed either from the desktop or from Computer. The file system can be accessed from Computer as well, which brings us to the '/' folder. By default, the Nautilus browser opens a new window every time you enter a new folder. To fix this, go to Edit > Preferences > Behavior and choose to use the File Browser.

Shortcuts in GNOME are called Launchers. When we open a launcher file, we can see the syntax for writing a shortcut.

GNOME is a Window Manager and Desktop Environment for system users. GNOME stores individual settings beneath the user's home directory. ~ expands to the current user's home directory (e.g. /home/username). ~ is the "tilde", pronounced "til-duh".

Debian 3

When Debian boots up, it follows a few predefined steps before control is given to the user. The first thing it runs, is BIOS, which stands for Basic Input/Output System. It initiates the POST (Power On Self-Test) which checks for hardware (RAM, CPU, Storage, Keyboard, Removable Storage, etc). Each computer architecture has their own hardware check routine (which may not be called BIOS).
After the hardware is checked, the BIOS initiates the MBR on the default storage device. Recall that GRUB is installed in the MBR. GRUB stands for Grand Unified Boot Loader, which was designed to replace LILO. GRUB allows booting into Multi-User or Single-User mode. Multi-User is the default, while Single-User is typically used for administrative purposes.

GRUB also allows interaction with the boot process. You can pass parameters into the kernel from the GRUB menu. GRUB is also used by SUSE and RedHat. GRUB then initiates the INITRD (Initial RAM Disk) which loads key drivers to support storage and other sub-systems. The MBR can only contain 512 bytes of data, so GRUB is split into multiple phases. The 512 byte code is the bootstrap, which loads the other components stored elsewhere.

The RAM Disk is temporary. After complete initiation of drivers, the kernel would have access to the '/' root file system. At this point the INIT process takes over, which sets up the system to the desired run-level. The run-level consists of killing and starting several services (klogd, syslogd, network services and other applications). INIT has the PID of 1, and is the parent of all processes. INIT is called by Scheduler, which has the PID of 0.

To view processes, you can use:
ps -aux
ps -ef

The above-described process is shared by most Linux systems. The main steps are: BIOS > GRUB > INITRD > INIT.

The Linux kernel is stored in /boot/. In the same folder, you'll also find the initrd.img image. In /boot/, we can also find the ./grub/ folder, which houses various menu items and scripts. In there, you can find menu.lst, which stores information similar to what you see with Windows' bcdedit.

In menu.lst, and many other parts of Linux, you will see hard disks specified in the form (hdx,y). x stands for the hard drive number, which starts from 0. y stands for the partition number, which also starts from 0. From menu.lst, you can also see the kernel and the initrd image that is used. You can see that the "single" parameter is passed into the kernel to boot into Single-User mode.

The files in /boot/grub/ reflect the stages of booting from GRUB. stage1 is exactly 512 bytes — the 512 bytes in the MBR. stage1_5 understands the file system. Do note that there is no separate ext3 stage1_5, because ext3 is essentially ext2 with journaling turned on, so e2fs_stage1_5 handles both. stage2 is considerably larger and continues the bootstrap process.

In /boot/, there is a config file. If we "less" the config file, we can see the kernel options enabled during compilation. When we do a ls -l, we can see that vmlinuz and initrd are both linked from the '/' of the file system. This is to allow the boot loader to find the file initially.

When INIT runs, it looks up a table known as inittab, found in /etc/. When we "less" inittab, we can see that the default runlevel is 2. From inittab, we can also see that the rcS script in /etc/init.d/ is always executed regardless of the runlevel.

Runlevel 0 runs scripts to shut down services; this is known as the halt runlevel. Runlevel 1 is Single-User mode, which has no networking for other users to connect. Runlevels 2-5 are for Multi-User mode. Runlevel 6 is for rebooting.

When CTRL+ALT+DEL is pressed, the system is rebooted. This is done by calling:
/sbin/shutdown -t1 -a -r now

INIT is also responsible for respawning ttys when they fail.

Sunday, February 21, 2010

Debian 2

debian-installer is the installer used for installing Debian. The language selected in debian-installer is not the system language; it is only for the installation process. However, the country list on the next page is based on the language selected.
The netinst image provides minimal packages. It is the base install. The extra packages will be downloaded from internet mirrors during the later parts of installation.

During installation, you are given the choice to partition the disk. LVM stands for Logical Volume Management. LVM is required to configure RAID here. It is not a good idea for all files to be on the same partition on a server. Ideally, each major folder (home, usr, var, tmp) gets its own partition, as suggested by the installer.

Ext3 is the newest file system offered by the installer. Ext3 is a journaling file system. A partition is the basic allocation unit for hard drives; file systems sit on top of partitions to provide features for users.

A journaling file system is a file system that keeps track of the changes it is about to make in a journal. Should a system crash occur during writing, it is easier to read from the journal and undo the damage. A journaling file system is less likely to become corrupted. The NTFS used in Windows is also a journaling file system, unlike its predecessor, the FAT file system.

For my purpose I am going to leave the system to use only the standard system. This will allow practice on installation and activation of basic services. After the packages are properly configured, GRUB boot loader will be installed on the hard disk. GRUB will be installed in the master boot record.

During boot, you will see two or more choices depending on whether your system is clean. Single-user mode is for emergency purposes where the administrator backs up or recovers a system.

Once the system is properly booted, you will be brought to the tty1 login. You can log in as either root or the non-privileged user from here. We can start by checking the version of Debian with the following command:
cat /etc/debian_version

We can now check the various file systems in place from the root folder (/) using:
df -h

You can check for any listeners using the following commands:
netstat -ntl
netstat -nulp

-ntl will show TCP listeners, while -nulp will show UDP listeners. From a base installation you will only see the system listening to UDP port 68 which is for DHCP.

To check the system's network status, use the following command:

You can check the nameservers configured using:
cat /etc/resolv.conf

We can reveal the IP addresses returned by the DNS using:
dig www.google.com

To find out what programs are installed, use:
dpkg -l | more

To do a filter of the output, use the grep pipe. For example, to show lines with "ssh" in it.:
dpkg -l | grep ssh

Multiple ttys are configured, so you can press CTRL+ALT+F#, where # is the tty number you wish to go to. tty7 is for the GUI, which isn't installed.

We can check the tty consoles configured using:
cat /etc/inittab

Debian 1

Debian is a fully open-source Linux OS based on the 2.6.x kernel (Lenny runs 2.6.26). Debian featured over 18000 packages as estimated in 2009. Debian also supports 11 hardware architectures (i386, ia64, amd64, PowerPC, SPARC, MIPS, S390, etc.). Multi-architecture ISO images are also available.
Debian also supports graphical and text-based installation. PXE will run with text-based installations. Debian is also downloadable from various protocols including HTTP, Bit Torrent, Jigdo. Debian is also installable from CDs, DVDs, USB Sticks, netinst, PXE, etc. Packages supported include GNOME, KDE, OpenOffice.org, Aptitude, Disk Encryption.

To run Debian efficiently, we need a 1GHz P4 or equivalent processor with 256MB of RAM and 1GB of free storage. Recommended requirements are 512MB of RAM and 5GB of free storage. Debian can coexist with other operating systems, e.g. Windows.

Debian can be downloaded from debian.org > Getting Debian. You can both download or buy Debian CD/DVD sets. Business card ISOs are for business card sized CD/DVDs. It is recommended to download Debian during off-peak hours.
That should be the end of the Windows 2008 R2 series. I'll move into Linux next. Yes you heard it right: Linux - my biggest fear. I'll do a full review on Debian, and I think my research on Linux is going to be very, very long. As of now, I have just about 20 full-fledged articles, and I predict I'll have 40 articles by the end of Linux (if, it'll ever end). Hopefully I can spend more time typing in <code> tags instead of taking screenshots. Nevertheless thank you Imageshack for the great hosting services. I'll make a donation once I get my netbook.

Windows Server 2008 R2 11

DirectAccess is the ability to have VPN Access without having VPN hardware. DirectAccess is a solution native to 2008 R2 that doesn't require extra VPN hardware (routers, etc).
We can set up a user machine so that users can get access to their corporate files over standard internet connections. Through DirectAccess, administrators can also push GPO updates to machines that are on the move without requiring them to be on-premises.

The components required for DirectAccess are: a Windows 2008 R2 machine to act as the DirectAccess server; mobile clients running Windows 7 or above; at least one Domain Controller and DNS server, running 2008 SP2 or above; and finally a PKI infrastructure with a NAP (Network Access Protection) server. DirectAccess makes use of IPsec and Teredo (IPv6 infrastructure).

Setting up of DirectAccess is extremely complex. DirectAccess also connects through port 443, which is HTTPS. This means that you will be able to do DirectAccess through any firewall that allows 443 traffic (literally every firewall).

The walkthrough for DirectAccess can be found here.

Windows Server 2008 R2 10

BranchCache keeps branch offices from bleeding bandwidth. When a branch user requests a file from headquarters, the file is downloaded over the WAN. If three users request the same file, it is downloaded thrice. This is wasted bandwidth.
BranchCache Co-op mode caches the files IN THE CLIENT MACHINES, so a file is downloaded over the WAN once and then copied to any other machine requesting it. This is different from the old method of putting the files in a proxy.

BranchCache Hosted mode lets an R2 machine in the branch cache the files. In this case, the first client hands the file over to the hosted cache server, and other clients then download from that R2 machine. The co-op method is recommended for 15 users or fewer; solid reliability and availability are achieved with the Hosted method.

BranchCache is typically installed on the File Server or the Web Server. After installing BranchCache, we enable BranchCache on clients using GPO. BranchCache can be enabled through the registry as well.

BranchCache Hosted requires a shared location that is available for all participating users. BranchCache also must be allowed through the firewalls. The co-op version is easier to implement.

BranchCache is a role service which is installed under File Services.

A restart is not necessary. After installing BranchCache, we need to now enable it through GPO. The fastest way to modify Local Computer Policy is to go to mmc, then add the Local Computer Policy Snap-in.

The first policy we need to change is the Lanman Server. We can go there through Computer Configuration > Administrative Templates > Network > Lanman Server. We need to enable the Hash Publication for BranchCache GPO.

In the same folder, we have BranchCache. BranchCache settings are for clients, so we don't need to change anything in the server. We now need to begin configuring the client machines. We do this now through GPOs.

In the GPO, browse to the same folder as the Lanman Server above. Now we'll turn on BranchCache. Since we're using co-op mode, we'll also turn on the Distributed Cache mode.

We can specify which kind of files should be cached from BranchCache for Network Files GPO setting. You can also specify how much space is allowed for BranchCache. These two are optional.

To add the exception, browse to Computer Configuration > Policies > Windows Settings > Security Settings > Windows Firewall with Advanced Security > Windows Firewall with Advanced Security > Inbound Rules.

We'll add a predefined rule which works with Windows 7 only. We need to add both BranchCache - Content Retrieval and BranchCache - Peer Discovery.

Finally we need to specify which folder should be cached. Go to the properties of the folder from File Services in Server Manager, and enable it like this:

Windows Server 2008 R2 9

AppLocker in Windows 7 and 2008 R2 lets you decide who can run what applications on the desktops. AppLocker can be now applied in the GPO settings. AppLocker can be found in Computer Configuration > Policies > Windows Settings > Security Settings > Application Control Policies > AppLocker.
For AppLocker to work, we have to run the Application Identity service. This is not turned on by default, so we have to enforce it through GPO. To enforce this, browse to Computer Configuration > Preferences > Control Panel Settings > Services, then add the AppIDSvc service.

AppLocker lets us create rules to decide who gets to run what. Before doing anything else, we need to create the default rules. If you don't create the default rules, users will be locked out of their computers.

We then create rules by using the Create Executable Rules wizard.

After we're done with our rules, we should then go over to the AppLocker page to enforce all rules.

A restricted program would appear like this on a client machine:

Windows Server R2 8

You can cluster multiple Hyper-V servers (the host machines). To build a Hyper-V cluster, you need two Hyper-V servers with identical hardware configurations, and a separate SCSI-3 compliant storage appliance (SAN/NAS Storage, iSCSI is a good option). On the storage you need two volumes, one volume for quorum and one for storage.
When one machine needs to go down for maintenance, the VM can be migrated to the other live machine. This means that the RAM of the machine can be moved over, without shutting it down.

There are two methods:
1) Quick Migration
The VM is first hibernated, moved (in terms of the service) to the other node, then brought back up. This was already available in Server 2008 R1. This downtime may not be good for the SLA.

2) Live Migration
The VM's RAM is copied over to the other node while the original machine is still running. There is no downtime. We don't have to know the details, it's all done automatically.

Typically to set up a Hyper-V cluster, you'll need a minimum of $20,000 worth of hardware. However, you can "hack" the server so you don't need to use a compliant iSCSI storage and two similar servers. However we do need a NAS storage appliance.

A "heartbeat" is like a "hello" message which components in a failover cluster use to make sure that everyone is alive. This "heartbeat" should be sent out from a NIC dedicated to it.

Failover Clustering needs to be installed on both Hyper-V hosts. After installation, you would be able to run the Failover Cluster Manager.

To attach an iSCSI storage, we need to use the iSCSI initiator. The one in R2 has a Quickconnect tool which was missing in 2008.

After connecting to the disks, go to Volumes and Devices and click on Auto Configure to mount the volumes.

Note that we need to create iSCSI targets from the NAS before Windows will detect it. After adding them from the iSCSI, we would be able to access them from Disk Management. Failover clustering wizard requires that the disks be OFFLINE before it would proceed. The iSCSI initiator needs to be run from both systems (all the participating systems).

In the Failover Cluster Manager, there is a "Validate a Configuration Wizard" which would check if our servers are compatible.

If your storage doesn't support SCSI-3 persistent reservations, you will not pass the validation test. However, you can get around this.

Here's how we work around the problem. We first disconnect the two disks from the iSCSI initiator. After this, it should disappear from Disk Management. After this, we'll run the validation again. We'll be able to proceed with some warnings.

After this, we'll run the Create Cluster Wizard to create the actual cluster.

We'll then need to give the cluster an IP address. This is a virtual IP similar to HSRP. This is not load balancing in a sense that it does not distribute the requests.

There are four types of quorum configurations:
1) Node Majority
This is not recommended as it cannot sustain a failure of any of the nodes.

2) Node and Disk Majority
If the quorum disk is online, it can sustain failure of 1 node.

3) Node and File Share Majority
If the quorum disk is online, it can sustain failure of 1 node.

4) No Majority
Can sustain the failures of all nodes except 1.

To do this hack, we'll need to choose Node and File Share Majority, which uses a share location as a quorum disk. The quorum disk may also be referred to as a witness disk. We can get to the configuration by right-clicking the cluster from the Failover Cluster Manager.

In Server 2008 R2, we can create Hyper-V machines directly from the Failover Cluster Manager, by going through the New Virtual Machine Wizard. This time, however, we create a disk on our system disk. This lets us proceed, but it is not really functional in terms of high availability.

Saturday, February 20, 2010

Windows Server 2008 R2 7

Terminal Services has been renamed to Remote Desktop Services. We can now serve personal desktops utilizing Hyper-V, similar to the Citrix style of computing. Microsoft is now moving back to thin-client computing.
To do this, we need two servers. The first server is the RDS server, while the second server is the Hyper-V server. The RDS server acts as an authenticating gateway and relay for the Hyper-V server.

The advantage of this is that any computer that is XP SP3 and above can run things from the Hyper-V server that require more powerful hardware. This also allows greater administrative control and security.

You can also quickly reset the Hyper-V machines to snapshots. This is good for training environments or campus settings where machines must be restored to its original state everyday. You can also use web-access to distribute the virtual desktops wherever the user is.

VDI stands for Virtual Desktop Infrastructure. VDI has been used throughout computer history (recall dumb terminal and mainframe) but not as elegantly as we have it now.

We first create a RDS with Remote Desktop Session Host. The Session Host is the one that takes care of connectivity. The Session Host should also have the Connection Broker, the Gateway and the Web Access features installed.

Network Level Authentication enforces authentication before users reach the login screen. If NLA is disabled, users can establish a Remote Desktop session to the Session Host before being asked for credentials.

Licensing mode is usually left blank for home users. This would allow us 120 days of trial usage.

Next we would have to configure who is allowed to connect to the Session Host. It is a good idea to keep the Administrators group in the list. You can add both Groups or individual Users.

A new screen is the Client Experience. Here we choose how full a Windows 7 experience we would like for our clients. The choices can be seen in the following screenshot, which includes AV Playback, Audio Recording Redirection and Desktop Composition (the Aero interface). Audio Recording Redirection lets the client record audio locally and push the sound to the RDS machine.

SSL Certificates, as said before, requires a PKI to run fully. In a test environment without a PKI, it is recommended to use a self-signed certificate. RD Gateway needs a valid certificate to function properly.

Similar to users allowed to connect to a Session Host, you'll need to define those allowed to connect to the gateway.

TS_CAP and TS_RAP have now been renamed to RD CAP and RD RAP. Connection Authorization Policies (CAPs) let us decide which users can connect to the Gateway server. You can also use smart cards for authentication.

Resource Authorization Policies define what computers can be actually Remote Desktop'ed into. In a test environment, it is common to select "All computers".

NPS is Microsoft's 2008 implementation of RADIUS. It was previously known as IAS. The usage is quite similar. You do not necessarily have to configure this for RD to work, but it's good practice to put sensible restrictions in place.

In the Hyper-V server, we would have to install Remote Desktop Services as well, but this time, we need to install the Remote Desktop Virtualization Host to allow Session Hosts to retrieve VM feeds from it.

We do have to do some configuration before RDS can work. Note that each Hyper-V machine can only be used by one user at a time, and that we should name VMs with their exact FQDN to be able to make them available. We now need to prepare the VM before it can be used by users through RDS.

First, we need to make sure that the machine is set to allow remote desktop connections. We set it through My Computer > Properties > Advanced System Settings. From here, we need to also select the users allowed to remotely connect to this computer.

We now need to make use of the wmic (Windows Management Instrumentation Command-line) to add permissions for the Hyper-V machine:

wmic /node:localhost RDPERMISSIONS where TerminalName="RDP-Tcp" CALL AddAccount "google\machine$",1
wmic /node:localhost RDACCOUNT where "(TerminalName='RDP-Tcp' or TerminalName='Console') and AccountName='google\\machine$'" CALL ModifyPermissions 0,1
wmic /node:localhost RDACCOUNT where "(TerminalName='RDP-Tcp' or TerminalName='Console') and AccountName='google\\machine$'" CALL ModifyPermissions 2,1
wmic /node:localhost RDACCOUNT where "(TerminalName='RDP-Tcp' or TerminalName='Console') and AccountName='google\\machine$'" CALL ModifyPermissions 9,1
Net stop termservice
Net start termservice

Replace the underlined parts with the Hyper-V machine's domain and name (the Hyper-V host, not the VM). We now need to enable Remote RPC from the registry. From the VM, browse to:
HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server

Look for the AllowRemoteRPC and set it to 1. After that, we need to add the exception into Firewall for Remote Service Management. We only need to allow Domain.

We're done with the client now. At this point, we'll need to configure the RD Connection Broker to work with Personal Desktops.

On the next page, we'll need to specify the Session Host. Since we're on the same system, we can simply type in this machine's FQDN. The same needs to be done for Web Access server.

After that, we can assign users to specific machines.

Right now there would be a problem with the Web Access as we're using a self-signed certificate. So now we'll have to add RD sources into the Web Access. To do this, we need to log into Web Access with a Domain Admin account.

Once we're in, we can specify the source. We want to use the RD Connection Broker as the source, so select the source and type in the machine's FQDN.

Windows Server 2008 R2 6

In Hyper-V 2008 R2, you don't have to shut down your system in order to add extra storage. This is known as Hyper-V Dynamic Storage. Back in Hyper-V 2008 R1, we were only able to use 4 processors; Hyper-V 2008 R2 works with 8 physical processors and up to 64 logical processors.
You can also use up to 1TB of RAM. We can also do Hyper-V clustering now, which allows another system to take over the role of a failed one. Hyper-V clustering requires expensive hardware to work properly. We can now also support 384 virtual machines with up to 512 virtual processors.

We can also do Live Migration which allows migration of virtual machines across different processors from the same vendor. Do note that Dynamic Storage only allows adding of virtual SCSI hard disks. However, you'll need to boot up with a virtual IDE disk, so you cannot hot-add IDE disks.

To do hot-adding of SCSI, simply go to the Hyper-V management console from the Server Manager. We can then add SCSI hard drives from a VM's properties by going to the SCSI controller (which wasn't available in 2008 R1). After adding, you can go to Server Manager > Storage > Disk Management to initialize the new disk.

You can also remove and swap the disks to other virtual machines quickly from the same place. The disks are VHD files, which can also be mounted back to your host operating system. You can mount VHDs from anywhere - even from a thumbdrive.

Windows Server 2008 R2 5

You can boot a machine with a virtual hard disk in Windows Server 2008 R2 and Windows 7. Using VHD, you can test out new servers and OS without removing the existing operating system.
The VHD is a file, similar to VMWare's virtual disks. In Server 2008 R2, we can boot up from the VHD on the hard disk. You can also boot up a VHD from a USB drive.

To set up VHD boot, we need to use the bcdedit command. VHD files are created from Hyper-V. When you're done with the Hyper-V machine, turn it off and copy the VHD file.

You can also create a VHD from scratch from Server Manager > Storage > Disk Management > More Actions > Create VHD. If you create a VHD this way, you need to install an operating system on it.

Tip: Holding down the shift key while clicking on taskbar icons would open a new window. Similar to right-clicking it and launching it again.

To show all boot loader entries, we can simply type:
bcdedit /enum

We need to clone the current boot loader configurations like this:
bcdedit /copy {current} /d "Server Core R2"

Replace "Server Core R2" with whatever name you want to see during boot. After creating the copy, you'll see the GUID assigned to it. Copy this GUID down.

We need to run bcdedit like this after modifying the underlined parts:
bcdedit /set {GUID} device vhd=[c:]\Folder\VHDFile.vhd
bcdedit /set {GUID} osdevice vhd=[c:]\Folder\VHDFile.vhd
bcdedit /set {GUID} detecthal on

Booting like this would clear out all existing configurations. You will have to set up the operating system from scratch again.

To delete an entry from the boot loader, we can use bcdedit again:
bcdedit /delete {GUID} /cleanup
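
The sequence above (set device, osdevice, and detecthal for a given GUID) can be templated to avoid typos. A minimal sketch that only builds the command strings; {GUID} and the folder path are placeholders you substitute from the /copy step:

```python
def vhd_boot_commands(guid, vhd_path, drive="c:"):
    """Build the bcdedit lines that point a boot entry at a VHD file."""
    loc = f"vhd=[{drive}]{vhd_path}"
    return [
        f"bcdedit /set {guid} device {loc}",
        f"bcdedit /set {guid} osdevice {loc}",
        f"bcdedit /set {guid} detecthal on",
    ]

# Placeholder GUID and path -- use the GUID printed by bcdedit /copy:
for line in vhd_boot_commands("{GUID}", r"\Folder\VHDFile.vhd"):
    print(line)
```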

FTP Introduction

Alright, now I'm going to sidetrack a bit and show you how to set up a small, lightweight FTP server using FileZilla Server. This guide covers FTP in Passive mode (PASV), which is the de facto standard today. I'll also share tips on making your server reachable over the internet.
FTP stands for File Transfer Protocol. The FTP control channel runs on port 21 by default, though we will change this later on. Port 21 typically carries only control messages, while another dynamically negotiated port (within a range you can specify) handles the actual data transfer between the server and the client.
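
In passive mode, the server answers the client's PASV command with a reply like 227 Entering Passive Mode (192,168,0,10,55,72), and the client derives the data port from the last two numbers. A small sketch of that calculation:

```python
import re

def parse_pasv(reply):
    """Extract (host, data_port) from a 227 PASV reply."""
    nums = re.search(r"\((\d+(?:,\d+){5})\)", reply).group(1)
    h1, h2, h3, h4, p1, p2 = map(int, nums.split(","))
    # The data port is encoded as two bytes: high * 256 + low
    return f"{h1}.{h2}.{h3}.{h4}", p1 * 256 + p2

print(parse_pasv("227 Entering Passive Mode (192,168,0,10,55,72)"))
# ('192.168.0.10', 14152)
```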

First of all, we need to set up our DynDNS connectivity. Consumer internet connections usually give you a dynamic IP, which changes from time to time, so you cannot consistently reach the same machine through the same IP. DynDNS is a service that lets clients with dynamic IPs use domain names that always point to the client's current IP, giving us a more consistent online presence. To begin, get an account from the official website here.
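
Under the hood this is ordinary DNS: clients just resolve the name each time. A sketch of that lookup (the dyndns.org name in the comment is a hypothetical placeholder):

```python
import socket

def current_ip(hostname):
    """Resolve a hostname to whatever IP it currently points at."""
    return socket.gethostbyname(hostname)

# With DynDNS, the name resolves to whatever IP your update client
# last reported, e.g. current_ip("myname.dyndns.org") (placeholder name).
print(current_ip("localhost"))  # 127.0.0.1
```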

After you've chosen a nice domain name, download an update client (either third-party or official will do) here. When you're done, use a ping tester site like ping.eu to check that your DynDNS name is updating properly. My router is a WRT54G, so I can run the Tomato firmware, which handles DynDNS updates for me.

After the dynamic domain name is established, grab FileZilla Server here. FileZilla Server is only available for Windows as of writing, but there are many other free alternatives for other platforms.

First of all, install FileZilla Server. Run the installer and go through the installation process.

As you can see, I didn't want the Desktop icon. But whatever your preference is, make sure that you include the Administration interface because that's going to be how we configure the server later on!

I left the port as 14147. The port specified here refers to the control port (which is 21 by default). FileZilla Server can be started as a service or as a console application (similar to Tomcat). I'm installing it as a service and starting it manually because I don't use it regularly, but you can choose automatic startup if you prefer.

I certainly do not want the Administration Interface popping up every time I log in, so I turned this off. I think you'd want to do the same.

Start by connecting to yourself. Regardless of the port you chose, it always listens on port 14147 on the local loopback interface ( After you're connected, we can begin sharing folders and adding users. To do this, click Edit > Users.

On the right side of the General section, click Add. After adding a user, tick Password and set a password for the user.

After doing this, we can specify which folders the user has access to in the Shared Folders section. This works well for our scenario, but if you have multiple users with different shares, it's better to work with Groups.

From this point you can try FTP-ing to yourself through ftp://localhost/. However, what we want is to reach the server from elsewhere on the internet, so we'll set up internet connectivity next.

Go to Edit > Settings > Passive mode settings. Right here, we'll give it our DynDNS domain name, as well as a range of ports for data communication:

While you're here, it's also good practice to change the login banner. Do not put anything welcoming or revealing: the default banner tells any potential attacker which platform you're running, which aids the reconnaissance stage of a network attack. Use something that implies legal action will be taken, so that if you ever end up in court, you have something to point to.

Type up a banner like:
This is a private FTP server! You are NOT allowed in here!
Your IP address has been logged for investigatory use!
All activities logged from this point will be used against you!
If you decide to proceed, you may face legal action!

Of course, if you tell the attacker that you've logged their IP, you must mean what you say. Go to the Logging section and turn on logging; 100KB is enough for typical scenarios.

That is all the configuration required for the server side. Now we'll have to forward the ports that we have. Remember, we have these ports:
FTP Control: 14147 (Default 21)
FTP Data: 14148, 14149, 14150

We now have to do port forwarding from the router. This process differs between routers, so check sites like http://portforward.com/ for instructions for your router. Mine looks like this:

It's difficult to test the setup from inside your own network, so use sites like these to help you:
1) ping.eu
From this site, choose Port check and attempt to scan for the ports you have forwarded. If it's successful, you should see "open".

2) Gene6 Online FTP Test
Just plug in the parameters they require and wait for the success (or failure) message.
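
A check like the one ping.eu performs can also be scripted. A minimal sketch using a plain TCP connect (the dyndns.org name in the comment is a placeholder):

```python
import socket

def is_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Check each forwarded port from a machine outside your network, e.g.:
# for port in (14147, 14148, 14149, 14150):
#     print(port, "open" if is_port_open("yourname.dyndns.org", port) else "closed")
```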

Now that basic connectivity is settled, we'll set up security. There are two modes of FTP over SSL: Explicit and Implicit. This guide sets up Explicit mode, which doesn't require a full PKI infrastructure.

To do this, we need to generate the necessary certificates. FileZilla can do this automatically, so no worries here. Go to Edit > Settings > SSL / TLS Settings and click "Generate new certificate" to create a self-signed certificate.

After that, you can enable SSL / TLS. To connect to the server in Explicit mode, use the ftpes:// protocol. For example, I would connect to myself with ftpes://syraxius.ath.cx:14147/. There is no need to forward any more ports, as Explicit mode listens on the regular FTP control port. (The port 990 you see in the configuration is for Implicit mode.)
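
On the client side, Python's ftplib speaks explicit FTPS through FTP_TLS. A sketch; the host, port, and credentials in the comments are placeholders, and verification is disabled only because the certificate above is self-signed:

```python
import ssl
from ftplib import FTP_TLS

# Self-signed certificate: skip verification for this sketch only.
# In real use, trust or pin the server's actual certificate instead.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

ftps = FTP_TLS(context=ctx)
# Placeholder host/port/credentials -- replace with your own:
# ftps.connect("yourname.dyndns.org", 14147)
# ftps.login("user", "password")
# ftps.prot_p()        # encrypt the data channel as well
# print(ftps.nlst())
```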

Do note that FileZilla Server is also included in a server suite known as XAMPP, which you can obtain here.

XAMPP can also be put in a thumbdrive for portability. You can get the portable version here.

Windows Server 2008 R2 4

You can now build an ASP.NET application server on Server Core, which wasn't possible in 2008. Server Core is more secure, has a smaller footprint, and requires less hardware, but is managed from the command line. The web server on Server Core in 2008 could only serve static content, and nothing else.
Server Cores are typically managed remotely through RSAT. Locally, however, you can use the following command to configure the server:
sconfig

Basic settings such as networking and joining the domain should be done first.

We can now enable management tools like PowerShell through sconfig menu option 4, then option 2.

We then enable MMC Remote Management and Server Manager Remote Management from the same menu. (Server Manager Remote Management requires PowerShell to be enabled first.)

We use Deployment Image Servicing and Management (dism) to deploy roles; it replaces the older ocsetup. First, we install the .NET Framework 2 and 3 features for Server Core:
dism /online /enable-feature /featurename:NetFx2-ServerCore
dism /online /enable-feature /featurename:NetFx3-ServerCore

Then we install IIS:
dism /online /enable-feature /featurename:IIS-WebServerRole
dism /online /enable-feature /featurename:IIS-ISAPIFilter
dism /online /enable-feature /featurename:IIS-ISAPIExtensions
dism /online /enable-feature /featurename:IIS-NetFxExtensibility
dism /online /enable-feature /featurename:IIS-ASPNET
dism /online /enable-feature /featurename:IIS-ManagementService

Next, we install Activation Service and the Configuration API:
dism /online /enable-feature /featurename:WAS-WindowsActivationService
dism /online /enable-feature /featurename:WAS-ConfigurationAPI

Finally, we install the FTP extension for IIS:
dism /online /enable-feature /featurename:IIS-FTPServer
dism /online /enable-feature /featurename:IIS-FTPSVC
dism /online /enable-feature /featurename:IIS-FTPExtensibility
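
The feature installs above can be batched instead of typed one by one. A sketch that just generates the command lines (on the Server Core box you could run each via subprocess); the feature names are exactly the ones listed above:

```python
# Feature names exactly as listed above (Server 2008 R2 Server Core).
FEATURES = [
    "NetFx2-ServerCore", "NetFx3-ServerCore",
    "IIS-WebServerRole", "IIS-ISAPIFilter", "IIS-ISAPIExtensions",
    "IIS-NetFxExtensibility", "IIS-ASPNET", "IIS-ManagementService",
    "WAS-WindowsActivationService", "WAS-ConfigurationAPI",
    "IIS-FTPServer", "IIS-FTPSVC", "IIS-FTPExtensibility",
]

def dism_commands(features):
    """Build one dism enable-feature command line per feature."""
    return [f"dism /online /enable-feature /featurename:{f}" for f in features]

for cmd in dism_commands(FEATURES):
    print(cmd)
```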

We need to allow the server to be managed through the Web Management Service:
Reg Add HKLM\Software\Microsoft\WebManagement\Server /V EnableRemoteManagement /T REG_DWORD /D 1

Finally, we start the Web Management Service:
net start wmsvc

To manage the server, we have to connect through the Server Manager from another machine to start the ASP Core service.

After it's done, go into Web Server (IIS) role from another system and connect to the remote server:

We will now set up the FTP server. We start by adding the FTP site from IIS Manager.

Follow the self-explanatory wizard. In this case, we want to use the FTP server for remote web server management, so we use C:\inetpub\ as the FTP root.

We would typically use SSL together with a PKI, but since we have none set up yet, choose No SSL. Authentication can be Anonymous or Basic; Anonymous authentication allows access without logging in. We can allow access to Specific Users or All users.

At any time, we can go back to FTP Authorization Rules to specify users and the permissions they can get.

These are the basic steps to set up a vanilla ASP Core server:
1) Change the password on the Administrator account.
2) Use sconfig for basic configuration (change computer name, set up networking, set time/date).
3) Use sconfig to enable remote management (Remote Management via MMC, PowerShell, Server Manager Remote Management).
4) Install roles (see the dism commands above).
5) Connect to the Server Core using Server Manager on a separate machine.
6) Connect with IIS Manager on a separate machine.
7) Create an FTP site for the Server Core using IIS Manager.
8) Set up user accounts.