Persistence AppInit

Malware Persistence: DLL Injection via the AppInit_DLLs Registry Value

Tools Used

    • Registry Editor (regedit)
Lab Requirements

    • Windows system (x86 or x64)
    • malware.dll (a renamed legitimate .dll file used as a stand-in)

One of malware's goals is to achieve persistence on the compromised system, and one technique malware authors implement is manipulating registry values.

In this demo, we will discuss how malware can persist on a system using the AppInit_DLLs registry value.
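For reference, the registry change this technique relies on can be expressed as a .reg file. This is a minimal sketch; the DLL path is a placeholder, 32-bit processes on x64 systems read the equivalent values under Wow6432Node, and AppInit_DLLs is only honored when LoadAppInit_DLLs is set to 1 (on systems with Secure Boot enabled, the mechanism is disabled entirely).

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Windows]
"AppInit_DLLs"="C:\\Temp\\malware.dll"
"LoadAppInit_DLLs"=dword:00000001
```

Every process that loads user32.dll will then load the listed DLL, which is why defenders monitor this value.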

Scenario: Your security solution detected that one of your organization's endpoints is reaching a non-whitelisted domain/IP. Initial investigation reveals that the user clicked and downloaded an email attachment a few days ago but failed to report it.

CLI Packet Analysis

How to Perform CLI-Based Packet Analysis

Linux commands used in this demo.

    • tshark
    • file
    • xxd

Lab Requirements

    • 2021-08-19-traffic-analysis-exercise.pcap (sample capture)

Employees are an organization's most vulnerable targets, giving attackers the ability to compromise victims by preying on human weaknesses such as emotion. For this reason, adversaries plan their assaults around phishing attacks.

In this demo, we will tackle how to analyze a packet capture using tshark.

Scenario: You are tasked to examine the network log of an endpoint that may have fallen victim to a phishing attack.

To do this, run the tshark command below.

tshark -t ad -r 2021-08-19-traffic-analysis-exercise.pcap -Y 'http.user_agent contains "curl" and http.request.method == GET'

In this case, the filter returns all GET requests from our .pcap file, and we see interesting traffic such as: 10.8.19.101 -> 185.244.41.29 HTTP 140 GET /ooiwy.pdf HTTP/1.1

#tip: filtering on a "curl" user agent is good for identifying scripted downloads, such as payload retrieval by command-line tools

See Image #1 below for reference.
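The display filter above can be mirrored in a few lines of ordinary code, which helps when reasoning about what tshark is matching. The following is a rough Python sketch over hand-made request records (the field names and sample values are illustrative, not tshark output):

```python
# Records stand in for parsed HTTP requests from the capture.
records = [
    {"src": "10.8.19.101", "dst": "185.244.41.29",
     "method": "GET", "uri": "/ooiwy.pdf", "user_agent": "curl/7.47.0"},
    {"src": "10.8.19.101", "dst": "93.184.216.34",
     "method": "POST", "uri": "/login", "user_agent": "Mozilla/5.0"},
]

# Equivalent of: http.user_agent contains "curl" and http.request.method == GET
hits = [r for r in records
        if "curl" in r["user_agent"] and r["method"] == "GET"]

for r in hits:
    print(f'{r["src"]} -> {r["dst"]}  {r["method"]} {r["uri"]}')
```

Only the curl-driven GET survives the filter, which is exactly the behavior the tshark one-liner gives us.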

To do this, run the tshark command below.

tshark -t ad -r 2021-08-19-traffic-analysis-exercise.pcap -Y "http" | less

#tip: To get more details from this command, add -x and -V and pipe the output to "less" to browse it.

In this case, we can see that there is insecure (cleartext HTTP) network communication.

See Image #2 for reference.

To do this, run the tshark command below.
tshark -Q -r 2021-08-19-traffic-analysis-exercise.pcap --export-objects "http,<target_directory>"
 
After successful execution, the exported HTTP objects can be found in your target directory, where we can run commands such as "file" and "xxd" to extract additional details.
 
Additional details: run "file <http_object>" to view its file type.
Additional details: run "xxd <http_object>" to view its hex dump.
Additional details: exporting HTTP objects also produces some .txt files that contain details about the host.
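To see why "xxd" is useful here, the following Python sketch reproduces a bare-bones hex view; the sample bytes are a fabricated stand-in for an exported HTTP object whose first two bytes are "MZ":

```python
# Minimal "xxd"-style hex view: offset, hex bytes, printable text.
def hexdump(data: bytes, width: int = 16) -> str:
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        text = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<{width * 3}} {text}")
    return "\n".join(lines)

print(hexdump(b"MZ\x90\x00\x03\x00\x00\x00"))  # PE files begin with "MZ"
```

The ASCII column on the right is where the telltale "MZ" of a Portable Executable jumps out.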
 
See Image #4 for reference
 
 
 

Why this approach?

An NSM solution (e.g., Security Onion) saves every log file to disk, and it is convenient to inspect these logs remotely without opening GUI-based tools such as Wireshark. Using the "--export-objects" option of tshark, we can export the dropped file and copy it to our analysis machine.

 

Browser History

Extracting Browser History artifacts using Memory Forensics: Volatility

Tools used in this demo.

      • Firefox
      • Volatility
      • Notepad++
      • CMD
      • PowerShell
      • strings (Sysinternals)

Browser artifacts may contain valuable information that helps the analyst correlate evidence and timeline the incident during an investigation; these artifacts can also reveal information such as URLs and attachments.

In this demo, we will tackle different ways to extract browser artifacts using the memory forensics tool Volatility.

For this demo, we use the Firefox browser to visit "https://eyehatemalwares.com" as the sample URL of choice.

Next, we run the Volatility pstree plugin to identify parent/child process relationships.

Command: volatility.exe -f browserhistory.vmem --profile=Win7SP1x64 pstree

In this case, we identify firefox.exe:532 as a parent process of all firefox.exe processes. 

Now, we can use Volatility Yarascan plugin to search for all URL instances found inside the browser process.

In this case, we use this regex pattern: “/(https?:\/\/)?([\w\.-]+)([\/\w \.-]*)/”

Command: volatility.exe -f browserhistory.vmem --profile=Win7SP1x64 yarascan -Y "/(https?:\/\/)?([\w\.-]+)([\/\w \.-]*)/" -p 532 > firefox_yaraURLscan.txt

Now, let us check "firefox_yaraURLscan.txt".

In this case, we use Notepad++ as our text editor to view the result.

By performing a few searches, we can see our target URL "https://eyehatemalwares.com".
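The idea behind the yarascan step can be sketched in plain Python: sweep a raw byte buffer with a URL pattern. The buffer below is fabricated for illustration, and the pattern is simplified to require an explicit scheme:

```python
import re

# Simplified URL pattern (scheme required, to avoid matching bare words).
URL_RE = re.compile(rb"https?://[\w.-]+[/\w.-]*")

# Fabricated stand-in for raw process memory.
memory = b"\x00junk\x00https://eyehatemalwares.com/index.html\x00noise\x00"

hits = [m.group(0).decode() for m in URL_RE.finditer(memory)]
print(hits)  # ['https://eyehatemalwares.com/index.html']
```

In the real workflow, Volatility walks the process's memory pages for us and applies the pattern with YARA; the matching logic is the same.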

Now, let’s jump to the next section.

The next option uses the Volatility memdump plugin. To do this, we first need to identify our target browser's process ID.

Now, we run the Volatility pstree plugin to identify parent/child process relationships.

Command: volatility.exe -f browserhistory.vmem --profile=Win7SP1x64 pstree

In this case, we see firefox.exe:532 as a parent process of all firefox.exe processes.

Next, we run Volatility memdump plugin to dump the firefox process.

Command: volatility.exe -f browserhistory.vmem --profile=Win7SP1x64 memdump -p 532 -D .

In this case, we successfully dumped firefox.exe:532 to our current working directory.

Now, let us extract all strings from this exported process.

To do this, we can use the strings.exe tool from the Sysinternals suite.

Command: strings.exe -a 532.dmp > demo_urlextract.txt

In this case, using notepad++ we can see all the strings extracted from our firefox.exe process.

Next, we filter the extracted strings using PowerShell's Select-String cmdlet.

To do this, open powershell.exe.

Next, run the following Select-String function.

Powershell: Select-String -Path .\demo_urlextract.txt -Pattern "https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)" | findstr -i eyehatemalwares

Regex Pattern Used: "https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)"

In this case, we can see our target URL "https://eyehatemalwares.com".
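The memdump -> strings.exe -> Select-String pipeline can be condensed into one Python sketch: pull printable ASCII runs out of a dump and filter for the domain of interest (the dump bytes here are fabricated):

```python
import re

# Fabricated stand-in for a process memory dump.
dump = b"\x01\x02GET https://eyehatemalwares.com\x00\xffrandom\x00abc"

# strings.exe equivalent: runs of 4 or more printable ASCII characters.
strings = [s.decode() for s in re.findall(rb"[\x20-\x7e]{4,}", dump)]

# Select-String / findstr -i equivalent: case-insensitive substring filter.
matches = [s for s in strings if "eyehatemalwares" in s.lower()]
print(matches)  # ['GET https://eyehatemalwares.com']
```

The same two stages (extract printable runs, then filter) underlie the strings-and-grep workflow on any platform.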

Phishing IR

Phishing Alert Incident Response

Linux commands used in this demo.

    • ngrep
    • file

Lab Requirements

    • A packet capture (.pcap) from the affected endpoint

Employees are an organization's most vulnerable targets, giving attackers the ability to compromise victims by preying on human weaknesses such as emotion. For this reason, adversaries plan their assaults around phishing attacks.

In this demo, we will tackle how to respond to a phishing incident.

Scenario: You are tasked to examine the network log of an endpoint that may have fallen victim to a phishing attack.

To do this, execute this "ngrep" command: ngrep -I <pcap_file> -q -W byline "^GET|^POST"

By executing this command, we can see an exchange of traffic between these IP addresses over cleartext HTTP:

10.8.19.101:49738 <-> 185.244.41.29:80

Next, we can perform a threat-intelligence lookup using the details extracted by the command above.

Now, we can see that the IP "185.244.41.29" was flagged as malicious by 4/94 AV vendors.

To do this, execute this "ngrep" command: ngrep -I <pcap_file> -q -W byline "HTTP" | more

Key Points to Know Here:

      • GET /ooiwy.pdf
      • A file with a .pdf extension should have the magic bytes "%PDF"; instead we see MZ (a Portable Executable)
      • Hard-coded User-Agent: Ghost

Now we know that we are onto something.
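The key observation above (a .pdf extension paired with an MZ header) can be captured in a small check. This is a hedged sketch; the function name and sample bytes are made up for illustration:

```python
# Flag files whose extension does not match their magic bytes.
def masquerading(filename: str, data: bytes) -> bool:
    expected = {".pdf": b"%PDF", ".exe": b"MZ"}
    for ext, magic in expected.items():
        if filename.lower().endswith(ext):
            return not data.startswith(magic)
    return False  # unknown extension: no opinion

print(masquerading("ooiwy.pdf", b"MZ\x90\x00"))  # True: a PE pretending to be a PDF
print(masquerading("report.pdf", b"%PDF-1.4"))   # False: extension and magic agree
```

Extension/magic mismatches like this are a classic masquerading indicator and cheap to automate across a directory of exported objects.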

If you are more comfortable performing the investigation in a graphical interface, you can use a tool like Wireshark.

Note: If you are not yet familiar with this tool, please see a Wireshark tutorial first.

To do this, first open Wireshark and filter using: ip.src == 185.244.41.29

#tip: Another approach is go to Statistics > Protocol Hierarchy > HTTP

Then, follow the HTTP Stream.

In our case, we can see the same "MZ" (DOS/PE) header we saw when running ngrep.

As you may recall from the previous steps, we saw a .pdf file with an MZ header (PE executable).

Now, our task is to dump that object to disk.

To do this, go to File > Export Objects > HTTP, select the object, and click Save.

The "ooiwy.pdf" file that the user downloaded is now dumped to disk.

We can now perform profiling of this object. To do this, we can run the “file” command.

Now, we can see ooiwy.pdf: PE32 executable, which means this is not a legitimate .pdf file.

Next, for the sake of this demo, we submit the sample to VirusTotal[.]com for heuristic scanning.

In our case, we can see that 44/66 AV engines flagged this as malicious, and some vendors detect it as Ryuk malware.

During a phishing incident, an analyst must be able to investigate an endpoint's network traffic. Timing is crucial, and being able to respond quickly and systematically benefits both the analyst and the organization.

In a real-world scenario, email attachments may contain sensitive information, and sending the file to an online scanner is not recommended, as it will expose that information to other researchers or even adversaries.

 

Profiling Linux Binary

Profiling Binaries in Linux Systems

Linux commands used in this demo.

      • ls
      • readelf
      • stat
      • cat
      • fdisk
      • istat
      • debugfs

An effective analyst must be able to conduct investigations on any operating system and know how to retrieve on-disk evidence.

Scenario: A suspicious file was found in /tmp/mal_dir, and you are tasked to perform live forensics on one of your organization's Linux systems to investigate the file and gather its metadata.

In this demo, we will tackle binary profiling in Linux systems and determine when the binary first appeared on the system.

In Linux systems, we are often limited to performing the investigation from the terminal.

First, let us check the directory where the binary was discovered; running the built-in command "ls -al" is the first step in performing triage.

In our case, we can see a regular file named "credstealer".

Next, we can check the binary’s file header by using “readelf -h” command.

In our case, readelf reveals us the binary’s header. 

Key Points to Remember:

Magic Number = 7f 45 4c .. .. (ELF Magic Number)

Class = ELF64 (credstealer is a 64-bit executable; the ELF magic is the counterpart of the MZ header on Windows)

Type = EXEC (executable file)

Entry Point = 0x4006e0
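The fields readelf reports live at fixed offsets in the ELF header, which we can verify by hand. The sketch below builds a minimal 64-bit header with the values noted above and parses the class, type, and entry point back out (offsets per the ELF64 layout):

```python
import struct

# Craft a minimal ELF64 header carrying the fields discussed above.
header = (b"\x7fELF"                       # e_ident: magic number
          + b"\x02"                        # EI_CLASS: 2 = ELF64
          + b"\x01\x01\x00" + b"\x00" * 8  # data/version/ABI + padding (16 bytes total)
          + struct.pack("<HHI", 2, 0x3e, 1)  # e_type=EXEC, e_machine=x86-64, e_version
          + struct.pack("<Q", 0x4006e0))     # e_entry: entry point

assert header[:4] == b"\x7fELF"            # the ELF magic number check
ei_class = {1: "ELF32", 2: "ELF64"}[header[4]]
e_type = {2: "EXEC", 3: "DYN"}[struct.unpack_from("<H", header, 16)[0]]
e_entry = struct.unpack_from("<Q", header, 24)[0]
print(ei_class, e_type, hex(e_entry))  # ELF64 EXEC 0x4006e0
```

readelf -h performs the same decoding against the real file, so these four facts can always be cross-checked manually with xxd.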

The stat command gives information about the file: size, access permissions, user ID, group ID, birth time, access time, and modification time.

We can use this command to profile our binary by running the command “stat credstealer“.

Key Points to Remember:

Size: 8280

Inode: 1836028 (an inode stores the metadata the filesystem keeps for each file or directory)

Uid – 1000/linux-analyst (User = 1000, Root = 0)

Gid – 1000/linux-analyst (Group = 1000, Root = 0)

Access, Modify, Birth and Change Time (This provides valuable information when it comes to timelining an attack)

Note: Notice that the birth time is blank, which tells us little about when the file first appeared on disk.
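The same metadata stat prints is available programmatically. The sketch below creates a throwaway file and reads the fields discussed above via os.stat (st_ino is the inode fed to istat/debugfs later):

```python
import os
import tempfile
import time

# Create a throwaway file so the example is self-contained.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"demo")
    path = f.name

st = os.stat(path)
print("Size: ", st.st_size)               # file size in bytes
print("Inode:", st.st_ino)                # inode number on this filesystem
print("Uid:  ", st.st_uid, " Gid:", st.st_gid)
print("Mtime:", time.ctime(st.st_mtime))  # last modification time
os.unlink(path)                           # clean up
```

Note that os.stat, like the stat command on many Linux filesystems, does not expose a reliable creation time, which is exactly why we turn to debugfs next.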

In the previous step, we extracted the file's metadata, which revealed its inode.

Now, let’s use the Inode to extract additional information.

First, list mounted disk information by using “fdisk“. #command: sudo fdisk -l | grep sda

Next, use Sleuth Kit's istat tool. #command: sudo istat /dev/<diskname> <inode>

Note: Sleuth Kit's istat displays the uid, gid, mode, size, link count, modified/access/changed times, and all the disk units a structure has allocated.

At this point, we have revealed additional information, but we still don't know when the file first appeared on the disk.

To answer this, we can use the "debugfs" tool. command: sudo debugfs -R 'stat <inode>' /dev/<diskname>

In this case, crtime (birth time) reveals that the file "credstealer" was created on disk on Aug 12 17:35:26 2022.

Note: The debugfs program is an alternative file system debugger. It can be used to examine and change the state of an ext2, ext3, or ext4 file system.

In a real-world scenario, crtime/birthtime can be used as a starting point for triage during incident response on Linux systems. This information can also be correlated with other data sources (e.g., network, system, and event logs).

Live forensics should be done after duplicating the system’s volatile image/disk to avoid tampering and maintain the system’s integrity.

Dumping Linux Module

Dumping Modules Associated with a Process in Linux Systems

Linux commands used to pull process modules.

      • pmap -d <PID>

Dumping process modules command: 

      • gcore -o <dest> <pid>

In this demonstration, we’ll use the built-in Linux tool “gcore” to dump an example process named bash with the process ID: 5885.

Scenario: We are instructed to examine this process on a live system under the assumption that bash:5885 is connecting a known malicious domain.

First, use the command "ps aux | grep <target_process>" to locate the target among the currently active processes.

Next, run "sudo gcore -o <dest_filename> <PID>". In this instance, the PID is 5885.

Now, run “strings <dumped process> | grep .so” command.

#Tip: On Linux, .so (shared object) files are the counterpart of .dll files on Windows systems.

#Tip: Keep an eye out for unusual uses of common objects and search for anomalies (for instance, a bash process loading a networking library to connect to the internet).

#Tip: Before doing live forensics, make a copy of the system’s volatile image.

Note: When a Linux system is vital and cannot be shut down for dead box forensics, live forensics is the only alternative.
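The "strings <dump> | grep .so" step amounts to extracting printable runs and keeping those that name shared objects. A rough Python equivalent over a fabricated dump buffer:

```python
import re

# Fabricated stand-in for a gcore dump of the bash process.
dump = (b"\x7fELF\x00/lib/x86_64-linux-gnu/libc.so.6\x00"
        b"garbage\x01/usr/lib/libssl.so\x00")

# strings: runs of 4 or more printable ASCII characters
printable = [s.decode() for s in re.findall(rb"[\x20-\x7e]{4,}", dump)]

# grep .so: keep only entries that look like shared objects
shared_objects = [s for s in printable if ".so" in s]
print(shared_objects)
```

Anything in this list that a shell process has no business loading (for example, crypto or networking libraries) is a lead worth pulling on.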

Pulling Linux Modules

How to Pull Modules and Libraries Associated with a Process in Linux Systems

Linux command used to pull a process's modules.

      • pmap -d <PID>

Linux commands used to pull process info.

      • ps aux
      • top
      • htop

Similar to Windows, when a Linux system exhibits unusual behavior, an executable file, a malicious process, or a malicious library may be at play.

The analyst must be able to look at the Linux process and its related libraries during this occurrence.

When it comes to malware analysis, the most frequent disk artifacts that malware leaves behind on a compromised system are the launched processes, produced modules, and libraries.

Before we dive into the main topic, let's first discuss the difference between .dll and .so files.

DLL – Dynamic Link Library

SO – Shared Object

In Windows systems, a process's related modules and libraries have the .dll extension (e.g., wininit.dll).

The related modules and libraries of a process on Linux systems have the .so extension (e.g., libc.so).

These executables have a set of features that are required by the process for smooth execution.

To pull all the associated modules and libraries from a Linux process we use a built-in tool called “pmap“.

First, identify the malicious process using the command ps or top.

Next, identify the PID or Process ID.

Now, use pmap command to pull the modules.

>>script: pmap -d <PID>
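Under the hood, pmap reads /proc/<PID>/maps, where each mapped library appears with its backing file path in the last field. The sketch below parses a sample of that format (the sample text is illustrative, not from a real process):

```python
# Sample lines in /proc/<pid>/maps format: address range, perms,
# offset, device, inode, and (optionally) the mapped file's path.
maps = """\
00400000-004be000 r-xp 00000000 08:01 1836028    /bin/bash
7f1c2a000000-7f1c2a1c4000 r-xp 00000000 08:01 921544    /lib/x86_64-linux-gnu/libc.so.6
7ffd3b000000-7ffd3b021000 rw-p 00000000 00:00 0    [stack]
"""

modules = []
for line in maps.splitlines():
    fields = line.split(None, 5)          # 6th field is the pathname, if any
    if len(fields) == 6 and ".so" in fields[5]:
        modules.append(fields[5].strip())
print(modules)  # ['/lib/x86_64-linux-gnu/libc.so.6']
```

This is why pmap needs no special instrumentation: the kernel already publishes every mapping, including loaded shared objects, through procfs.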

Linux Script

How to Document Script Execution in Linux Systems

Linux commands used to pull command history.

      • cat /home/<user>/.bash_history
      • history

Other commands used in this demo: 

      • script
      • md5sum

The analyst must record all the commands required to carry out the analysis while dealing on Linux systems.

Using the Linux built-in tool "script" is one method of doing documentation.

This command records all keystrokes made in the terminal along with their output.

Simply enter "script" in the terminal to start recording.

By default, a file named "typescript" is created in the current working directory; pass a filename (e.g., "script typescript.txt") to choose your own.

The session now captures every command run inside the terminal.

Enter "exit" in the terminal to stop the logging session and save the file.

The script log is comparable to Linux's bash history (the "history" command), which keeps track of all commands run in the terminal.

Bash history can be found at: /home/<user>/.bash_history 

In this case, we can run the script below for demonstration.

>>script: cat /home/linux-analyst/.bash_history

>>script: history

Next, we can open our “typescript.txt” file.

We can now view entries that match those in our bash_history file.

When producing the report, the analyst must record the files used to complete the work and protect their integrity.

 

One of the investigator's guiding principles is to record everything, and one way to achieve this is to hash the saved "typescript.txt" file with the built-in Linux tool "md5sum".

>>script: md5sum typescript.txt
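The md5sum step can be reproduced with Python's hashlib, which is handy when hashing must be scripted or verified later; the file name and contents below are illustrative:

```python
import hashlib

# Write a stand-in session log, then hash it the way md5sum does.
data = b"Script started\ncat /home/linux-analyst/.bash_history\nexit\n"
with open("typescript.txt", "wb") as f:
    f.write(data)

with open("typescript.txt", "rb") as f:
    digest = hashlib.md5(f.read()).hexdigest()

print(digest, " typescript.txt")  # same digest md5sum would print
```

Recomputing the digest at report time and comparing it to the recorded value is what proves the log was not altered after the session ended.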