Thursday, November 27, 2008

Basic eBlaster forensic analysis

eBlaster is computer monitoring software offered by SpectorSoft. They also make a product named Spector Pro, which is very similar. The main difference between the two is that eBlaster is designed for remote installation, with reports of activity delivered by email, whereas Spector Pro is designed for someone who has physical access to the monitored computer to review the reports.

eBlaster and Spector Pro are very powerful. The software is changed frequently so it remains undetectable by common anti-virus software. The following are some basic observations from a forensic analysis of a computer with eBlaster installed.

eBlaster can be installed remotely (Spector Pro cannot) by preconfiguring it with all the necessary options and then sending or giving it to someone to be installed. The main function of the program is to record all user activity, such as screenshots, emails, instant messages, etc., and then to send a report of that activity via email:



Installation of eBlaster is fairly simple and merely requires a registration key and an email address where the activity reports will be sent.

The eBlaster program uses some random folder/file naming techniques to make it a little more difficult to detect or locate. In all of my testing, the software always installed some of the required files into a randomly named subfolder under the "\windows\system32" folder. Eight files are installed into this folder during installation, one of which is an executable (the admin control panel), while the rest are either .dlls or files with misleading file extensions. The image below is an example of a folder randomly named "subitvox" under the "\windows\system32" folder:



The eighth file is in the subfolder named "canunsec" seen above. In each installation I performed, all of these files and folders received random names. Additionally, several .dll files are dropped into the "\windows\system32" folder.

One of the easiest ways to detect whether eBlaster has been installed is to attempt to locate a simple text logfile created by the program. The file is always in the root of the randomly named folder under "\windows\system32". The log file is a simple ASCII text file and commonly has a .dll file extension. The log contains some very predictable text that can easily be detected using a grep search:

11/27/2008 12:56:00: (AGT,EXPLORER) Initializing process for file C:\WINDOWS\explorer.exe Recording App 1 Blocking App 1
11/27/2008 12:56:00: (EBR,EXPLORER)
11/27/2008 12:56:00: (EBR,EXPLORER) Start Monitor - User lance on REG-OIPK81M2WC8
11/27/2008 12:56:00: (EBR,EXPLORER) Build Number 3067. Serial Number 1234567890
11/27/2008 12:56:00: (EBR,EXPLORER) Windows XP Home Edition Service Pack 1 (5.1.2600)
11/27/2008 12:56:00: (EBR,EXPLORER) IPC Message pump started.
11/27/2008 12:56:00: (SHR,EXPLORER) PacketProcessorEB::CreatePacketXML: Sending settings to server.

Some of the lines above have been word-wrapped by the blog, but normally each line in this text file begins with the datestamp followed by the timestamp. The datestamp format is always "mm/dd/yyyy". The timestamp format is always "hh:mm:ss:". A simple grep search for "##/##/#### ##:##:##:" would find this logfile, regardless of its name, with minimal false positive hits.
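As a sketch of the same search outside EnCase, the grep pattern above can be expressed as a regular expression (the sample line is taken from the log excerpt above; the function name is my own illustration):

```python
import re

# Regex equivalent of the grep pattern "##/##/#### ##:##:##:" described
# above: mm/dd/yyyy, a space, then hh:mm:ss with the trailing colon.
TIMESTAMP = re.compile(r"\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}:")

def eblaster_log_lines(lines):
    """Return only the lines matching the eBlaster timestamp signature."""
    return [line for line in lines if TIMESTAMP.match(line)]

sample = [
    "11/27/2008 12:56:00: (EBR,EXPLORER) Start Monitor - User lance on REG-OIPK81M2WC8",
    "unrelated text with no timestamp",
]
print(eblaster_log_lines(sample)[0])
```

The same pattern works when carved from unallocated space, since it anchors on the line format rather than a filename.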

The above method is the simplest way to locate active logs generated by eBlaster, as well as fragments in unallocated space, MFT records and the $LogFile.

The eBlaster software itself is controlled by several .dlls that are loaded via the registry. A random GUID is generated and placed in the HKLM\Software\Classes\CLSID key. Here is an example from one of the installations:

HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{7E116682-4410-4969-B8FA-5C3CCAE78026}\ProgID\: "Winoscmd"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{7E116682-4410-4969-B8FA-5C3CCAE78026}\InprocServer32\: "C:\WINDOWS\System32\chmucfav.dll"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{7E116682-4410-4969-B8FA-5C3CCAE78026}\InprocServer32\ThreadingModel: "Apartment"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{7E116682-4410-4969-B8FA-5C3CCAE78026}\: "Comivjob"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{AE256AD1-14D6-428F-BAEE-59B158AFFA0F}\InprocServer32\: "C:\WINDOWS\System32\midexkey.dll"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{AE256AD1-14D6-428F-BAEE-59B158AFFA0F}\InprocServer32\ThreadingModel: "Apartment"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\CLSID\{AE256AD1-14D6-428F-BAEE-59B158AFFA0F}\: "sapiclan"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Winoscmd\CLSID\: "{7E116682-4410-4969-B8FA-5C3CCAE78026}"
HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Winoscmd\: "Comivjob"
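A minimal sketch of hunting for this pattern in a registry export: given lines in the key\value: "data" style shown above, pull out each CLSID and the DLL its InprocServer32 default value loads (the regex and function name are my own illustration, not part of eBlaster or any tool):

```python
import re

# Match lines like:
#   ...\CLSID\{GUID}\InprocServer32\: "C:\WINDOWS\System32\name.dll"
INPROC = re.compile(
    r'CLSID\\(\{[0-9A-Fa-f-]+\})\\InprocServer32\\: "(.+\.dll)"')

def inproc_dlls(lines):
    """Map each CLSID GUID to the DLL path its InprocServer32 value names."""
    return {m.group(1): m.group(2) for line in lines if (m := INPROC.search(line))}

export = [
    'HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes\\CLSID\\{7E116682-4410-4969-B8FA-5C3CCAE78026}\\InprocServer32\\: "C:\\WINDOWS\\System32\\chmucfav.dll"',
    'HKEY_LOCAL_MACHINE\\SOFTWARE\\Classes\\CLSID\\{7E116682-4410-4969-B8FA-5C3CCAE78026}\\InprocServer32\\ThreadingModel: "Apartment"',
]
print(inproc_dlls(export))
```

Any hit pointing at a randomly named DLL in System32 is a candidate for closer inspection.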

From a network perspective, upon initially booting the machine, a DNS request is made to a domain of "d2a1376gf-43ty-245a.com". That domain has the following registration information:

Registrant:
Spectorsoft Corp.
1555 Indian River Blvd
Bldg B-210
Vero Beach, FL 32960
U.S.

Registrar: DOTREGISTRAR
Domain Name: D2A1376GF-43TY-245A.COM
Created on: 23-MAY-07
Expires on: 23-MAY-09
Last Updated on: 10-APR-08

That domain currently resolves to the IP address of "209.61.133.199". This IP address is registered by a company named:

OrgName: Robust Technology
OrgID: ROBUST
Address: 12178 Fahr Park Lane
City: St Louis
StateProv: MO
PostalCode: 63146
Country: US

NetRange: 209.61.133.192 - 209.61.133.223
CIDR: 209.61.133.192/27
NetName: RSPC-22301-0007111720
NetHandle: NET-209-61-133-192-1
Parent: NET-209-61-128-0-1
NetType: Reassigned
Comment:
RegDate: 2000-07-12
Updated: 2000-07-12

After the DNS request, there is an initial posting of data to the remote server, most likely for license validation. This network traffic is sent via TCP port 443 in an SSL wrapper. Although you cannot easily see the contents, an initial or periodic communication to that IP address would be an excellent indication that eBlaster is installed. The program will periodically send activity reports to that IP address based on how it has been configured.

When in doubt, simply booting a copy of the machine in question in a controlled network environment (no Internet access!) would yield some instant communications that would tip you off. Here is a screenshot of the initial communication upon booting the system (between 192.168.214.1 <> 192.168.214.134 on port 443):



The above testing was done on the latest release of eBlaster as of 11/2008.

Friday, November 7, 2008

My current impression of cell phone forensic tools

As part of my work, I recently put together a fairly comprehensive cell phone forensic course. During the development phase of this project, I had a chance to use most of the common cell phone forensic tools and put them through their paces with over 50 different phones, most of which were international models.

In my opinion, cell phone forensics is nowhere near where computer forensics is today, mostly because it is a fairly new sub-field of digital forensics and the tools just have not been around long enough to evolve to the state of the current computer forensic tools.

I also think it is due to the complete lack of standardization by phone manufacturers. With computer forensics, different makes and models of computers generally have little effect on the analysis phase because how they operate is standardized and follows a set of design specifications. In cell phone forensics, each manufacturer could be using its own proprietary operating system, and each phone may operate completely differently from other models by the same manufacturer. This makes developing an all-inclusive tool that supports every manufacturer and model very difficult; it is something like hitting a moving target traveling at 200 mph. By the time you develop a tool to deal with a specific phone, five more new ones have been released that don't follow the same standard(s).

**** I have no association with any of these vendors****
The following is just my experience and impression of the current state of these tools; future version releases could improve or worsen their performance.

The tools I used and evaluated are as follows:

Cellebrite
http://www.cellebrite.com/

Neutrino (Guidance Software)
http://www.guidancesoftware.com/

Mobile Phone Examiner (AccessData)
http://www.accessdata.com/

Secure View (DataPilot)
http://www.datapilot.com/productdetail/253/producthl/Notempty

XRY
http://www.msab.com/

XACT
http://www.msab.com/

Paraben
http://www.paraben-forensics.com/catalog/product_info.php?cPath=26&products_id=343

Fernico ZRT
http://www.fernico.com/zrt.html

Project-a-phone
http://www.projectaphone.com/

To first summarize my experience and findings, I would rate my top three tools as:
Cellebrite
DataPilot
XRY

The reason for rating these tools as my top three is based on these criteria:
Functionality
Supported phones
Ease of use

Cellebrite
Currently, this is the only tool evaluated that can handle iPhones. This was not a deal-maker/breaker for me, but it is worth noting. It is a very simple to use handheld device that can be brought out into the field. I would love to see it have an internal battery to facilitate true in-the-field information gathering. The device handles many different phone models and supports cable connections to phones as well as Bluetooth. It could not be any simpler to use: clear, easy menu-driven screens guide the operator through the acquisition phase. Information can be sent immediately to an attached computer or saved to a USB flash drive, so it can be handed to an investigator for review.

DataPilot (Secure View)
A nice, compact kit. It comes with an excellent cable set that supports many different phones. This is a software solution that really only involves cables and a security key to enable the software. The software is simple to use and generates nice, clean reports.

XRY
XRY is a kit that comes in a fairly large box (suitcase). It comes with several cables, but not as many as Cellebrite or DataPilot. The XRY device itself is fairly small and self-explanatory with clearly labeled ports and connections. The device can be powered by a wall plug or by USB port, making field acquisitions very easy. The software interface is very simple to use and it supports a large number of phones.

For the rest of the devices I used and evaluated, the following are some of the findings and experiences that were relevant to my rating of these devices:

Neutrino
This device is an add-on to EnCase. It comes in a very large case. The biggest downside to this product is the lack of support for phones. The number of phones this device supports and can extract data from is very low, and its ability to read non-US models is even lower.

AccessData MPE
Notwithstanding all the known and previously discussed issues with FTK 2.0, I found this product to be very "clunky" and not too intuitive. I had frequent problems with the licensing of the MPE module and with it not recognizing phones that were connected. Phone support is also very low. Ease of use is very low.

XACT
XACT is the only tool evaluated that focuses on getting a physical image of a phone. I was very excited to see this product and try it out. The hardware and software are almost identical to XRY. The biggest disappointment I had with this product is that it just didn't work with or support many phones. Even the phones it claimed to support gave me trouble, and I later found out that it only supports phones with certain firmware. So if the documentation says it supports a Motorola SLVR L7, it may not work if that phone is running a certain firmware version. XACT can parse the "physical" image of some phones and break out the data into categories to show logical data, such as SMS messages, photos, etc., but this does not work on all models. I didn't mind this, because I could still look at the physical image, but unfortunately many of the phones I tried simply would not work because their firmware version was not supported. I was very happy that on an old Motorola SLVR L7 I examined, XACT was able to pull a physical image, even though it could not parse the data. A manual search of the data turned up several deleted SMS messages from 8-9 months in the past. The bummer was that when I tried three more Motorola SLVR L7 phones, a physical image could not be obtained because of an unsupported firmware version on those phones.

Paraben
This device suffers from many of the same drawbacks as Neutrino. It does not support many common phone types, and like Neutrino, it needs drivers installed for many of the phones.

Fernico ZRT
This really isn't a forensic tool, but rather a solution to process phones manually. It includes an awesome desk clamp, camera, microphone and software so that if you need to process a phone that isn't supported by one of the above tools, you can manually go through the phone and record everything as you do it. This is hands down my tool of choice when having to process or deal with phones that a forensic tool cannot process or when I want to manually capture something on a phone.

Project-a-phone
This tool is similar to Fernico, as it is used to manually process a phone and record right off the phone's screen as the investigator cycles through the phone screens. I found this product to be very low-quality and cheap looking. The camera image is very poor and not very usable. I would not recommend using this product at all.

Wednesday, October 22, 2008

If you could have any EnScript or filter, what would it be?

So I might be opening a can of worms with this post, but what the heck, I am bored. My question is: if you could ask for any EnScript to improve your process, speed things up, or just give you a feature you don't natively have in EnCase, what would it be? It could be eDiscovery related, forensic related, or just a general utility (Tetris, anyone?). It also does not need to be a stand-alone EnScript; it could be a filter/condition.

I am interested in hearing what the most popular request will be. Please post your "favorite request" in the comments of this post so others can see it, expand on it, tweak the idea or just echo your vote.

Let the wish-list begin....

Monday, October 20, 2008

SANS Forensic & Incident Response Summit in Las Vegas

SANS held a Forensic & Incident Response Summit last week (Oct 13-14) in Las Vegas. It was really nice to go and put faces to so many names of people I have communicated with in the past via email. It was a pretty interesting crowd that attended, with some very informative presentations.

I did a presentation at the end of the first day to talk about some basic simple forensic & incident response tools and methods that seem to work well for me. I have posted the PDF of my presentation here.

For those of you that have not tried the F-RESPONSE tool, you are really missing something quite useful. The founder of F-Response, Matthew Shannon, who was at the summit, announced on day one that version 2 of the F-RESPONSE tool was being released and that it now supports access to physical memory on a remote machine. This means that using the F-RESPONSE tool you can image any and all physical disks on a remote machine, as well as the physical RAM on that machine, all while the machine is running!! You can read more about their latest version here.

Aaron Walters also presented on how Volatility can utilize the F-Response tool with a new spin-off of Volatility that he created called "Voltage". It is a very cool tool that analyzes a memory dump and shows you what was going on at the time of the memory capture. The really cool thing is that Voltage can look at the memory live, in real time, using the F-RESPONSE tool, meaning you can look at it now, refresh the page two minutes later, and see the live representation of the memory contents two minutes later, not a captured image of it. As Aaron likes to say, he can actually watch the clock tick on the remote machine in memory!! VERY COOL!!! You can read about Volatility here.

Monday, September 15, 2008

EnScript to bookmark the MFT record of currently highlighted file in EnCase

I wrote this EnScript years ago and recently had a need to use it on some evidence. I realized I had not posted this before on the blog so I figured I would post it in case others had a similar need.

There are times when I want to look at the actual MFT record of a specific file. The most common reason is to look at the second set of timestamps that each MFT record has in the filename attribute. EnCase shows the first set (the ones in the Standard Information attribute) in the table pane, and normally that is sufficient. But there are times when I want to look at the second set of timestamps to see if the file's timestamps have been altered, or to help establish whether a file was copied or moved onto the media. This EnScript simply looks up the corresponding MFT record for the currently highlighted file and then bookmarks it (all 1024 bytes of it):



Highlighting simply means clicking the entry in the table pane of EnCase (upper right) so it turns blue; there is no need to highlight or sweep any data in the actual file. Once a file is highlighted, run the EnScript and you will get the following message:



Click "Ok" and then check your bookmarks:



You can then quickly inspect the actual raw MFT record to decode it manually or view any residual slack data, etc.

Download Here

Monday, September 8, 2008

Parse IIS FTP logs

I recently had an investigation involving the IIS FTP service. It involved an unauthorized person getting access to a specific user account and then being able to login via the FTP server and download several confidential files.

When reviewing the FTP logs, which had numerous legitimate logins every day, I found an immediate need to isolate the logins of the specific compromised user account. I could easily do this using a keyword search, but a lot of the FTP log information was co-mingled with legitimate FTP traffic, making it hard to follow. I decided to write an EnScript that parses the FTP logs and breaks out each FTP session into its own log file.

The IIS FTP log is similar to the IIS web log, except not nearly as verbose. There are several defined fields that look like this:

#Software: Microsoft Internet Information Services 5.0
#Version: 1.0
#Date: 2005-10-29 17:04:39
#Fields: time c-ip cs-method cs-uri-stem sc-status
17:04:39 192.168.11.173 [1]USER anonymous 331
17:04:39 192.168.11.173 [1]PASS guest@unknown 230
17:04:46 192.168.11.173 [1]QUIT - 257
17:04:49 192.168.11.173 [2]USER anonymous 331
17:04:49 192.168.11.173 [2]PASS guest@unknown 230
17:10:51 192.168.11.173 [2]QUIT - 257
17:10:54 192.168.11.173 [3]USER anonymous 331
17:10:54 192.168.11.173 [3]PASS guest@unknown 230
17:11:33 192.168.11.173 [3]closed - 426

There are a couple of important pieces of information contained in each line:
1. Timestamp
2. Source IP address
3. Session number
4. FTP command
5. FTP result code

Using the session number (the number in brackets), the EnScript parses through the file, pulls out all the associated log entries for each specific session, and writes them to a new file. For my specific purpose, it made things much easier. The EnScript names each new file with the session number, combined with the original filename that the entries came from, and also the user account the specific FTP session concerns, if it can be determined.
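The grouping logic can be sketched in a few lines (a hedged illustration of the approach, not the EnScript itself; the regex assumes the [n]COMMAND argument layout shown in the log excerpt above):

```python
import re
from collections import defaultdict

# Each data line looks like: "17:04:39 192.168.11.173 [1]USER anonymous 331"
SESSION = re.compile(r"\[(\d+)\](\w+) (\S+)")

def split_sessions(lines):
    sessions = defaultdict(list)  # session number -> its log lines
    users = {}                    # session number -> USER argument, if seen
    for line in lines:
        m = SESSION.search(line)
        if not m:
            continue              # skips the #-prefixed header lines
        num, command, arg = m.groups()
        sessions[num].append(line)
        if command.upper() == "USER":
            users[num] = arg
    return sessions, users

log = [
    "17:04:39 192.168.11.173 [1]USER anonymous 331",
    "17:04:46 192.168.11.173 [1]QUIT - 257",
    "17:04:49 192.168.11.173 [2]USER anonymous 331",
]
sessions, users = split_sessions(log)
print(len(sessions["1"]), users["2"])
```

Writing each `sessions[num]` list to its own file, named after the session number and `users[num]`, reproduces the per-session output described above.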

Using the IIS FTP log quoted above, and then selecting the log file and running the EnScript, the following files are created in the default export folder for the case:



The contents of each file are the FTP activity for that specific session:

17:04:49 192.168.11.173 [2]USER anonymous 331
17:04:49 192.168.11.173 [2]PASS guest@unknown 230
17:10:51 192.168.11.173 [2]QUIT - 257

By sorting and looking at the naming convention of each file, I could then quickly look at the FTP activity for the compromised user account and I could quickly identify large amounts of activity by the size of each log.

Hopefully, someone else has a use for it as well.

Download Here

Saturday, September 6, 2008

Search for keyword in selected file(s) and then parse till double CRLF

A friend contacted me this past week about a problem he was having parsing a large amount of data in unallocated. He had been searching for specific data that used to be in a text file and had since been deleted, but was still in unallocated. The data had a pretty logical structure, something like this:

label1:field1 label2:field2 label3:field3
label4:field4 label5:field5 label6:field6

label1:field1 label2:field2 label3:field3
label4:field4 label5:field5 label6:field6

.....

He wanted to parse the data back into a text file so he could process it some more, but it needed to be one complete record per line. I wrote an EnScript that asks for a keyword. The keyword should be unique; in this case it was the text "label1" found at the beginning of each record. The EnScript then parses from the keyword hit until it reaches a double CRLF and prints the parsed data on one line to the Console tab.
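The parsing idea can be sketched like this (a rough Python illustration under the same assumptions: a unique keyword starts each record and a double CRLF ends it; the function name is my own):

```python
# From each occurrence of the keyword, capture text up to the next
# double CRLF and flatten the record onto one line.
def parse_records(data: str, keyword: str):
    records = []
    pos = data.find(keyword)
    while pos != -1:
        end = data.find("\r\n\r\n", pos)
        chunk = data[pos:end if end != -1 else len(data)]
        records.append(" ".join(chunk.split()))  # collapse onto one line
        pos = data.find(keyword, pos + len(keyword))
    return records

data = ("label1:field1 label2:field2 label3:field3\r\n"
        "label4:field4 label5:field5 label6:field6\r\n\r\n"
        "label1:fieldA label2:fieldB label3:fieldC\r\n"
        "label4:fieldD label5:fieldE label6:fieldF\r\n\r\n")
print(parse_records(data, "label1")[0])
```

Each returned string is one complete record, ready to be written out one per line for further processing.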

Here is an example of the text in text view within EnCase:



So in this example, you would run the EnScript and enter a unique keyword found at the beginning of each record; here, "label1" appears at the beginning of each record, and the EnScript parses from the keyword hit until a double CRLF is encountered.



The result looks like this in the console tab:



I figured I would post the EnScript in case anyone else has a use for it.

Download Here

Monday, September 1, 2008

EnScript to parse USNJRNL

* UPDATED (11/29/08) - v1.1 - Improved parsing of large USNJRNL files
* UPDATED (03/17/10) - v1.2 -  Added export to CSV functionality

The USNJRNL is a file system transaction log located in the $EXTEND folder of an NTFS volume. This file system feature is available in Windows XP and greater, but it is disabled by default in XP. In Vista, it is enabled by default.

The file system journals changes to files into this log, even if the data in the file itself is not changed but only the metadata of the file.

The USNJRNL consists of one main file and two alternate data streams. The structure of the data in the USNJRNL•$J file (as displayed in EnCase) is pretty straightforward and is detailed below:

Offset(in hex) Size Description
0x00 4 Size of entry
0x04 2 Major Version
0x06 2 Minor Version
0x08 8 MFT Reference
0x10 8 Parent MFT Reference
0x18 8 Offset of this entry in $J
0x20 8 Timestamp
0x28 4 Reason (see table below)
0x2C 4 SourceInfo (see table below)
0x30 4 SecurityID
0x34 4 FileAttributes
0x38 2 Size of filename (in bytes)
0x3A 2 Offset to filename
0x3C V Filename
V+0x3C P Padding (align to 8 bytes)
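The layout above maps directly onto a fixed header followed by a UTF-16 filename; a hedged sketch of decoding one record (the field names are my own, and the timestamp is a Windows FILETIME, i.e. 100-nanosecond ticks since 1601-01-01):

```python
import struct
from datetime import datetime, timedelta

def parse_usn_record(buf, offset=0):
    """Decode one version-2 USN record laid out as in the table above."""
    (size, major, minor, mft_ref, parent_ref, usn_offset, filetime,
     reason, source_info, security_id, file_attrs,
     name_len, name_off) = struct.unpack_from("<IHHQQQQIIIIHH", buf, offset)
    name = buf[offset + name_off:offset + name_off + name_len].decode("utf-16-le")
    when = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    return {"size": size, "name": name, "reason": reason, "timestamp": when}

# Build a synthetic record to demonstrate the round trip.
name = "test.txt".encode("utf-16-le")
record = struct.pack("<IHHQQQQIIIIHH", 0x3C + len(name), 2, 0, 5, 6, 0,
                     128166768000000000, 0x100, 0, 0, 0x20,
                     len(name), 0x3C) + name
print(parse_usn_record(record)["name"])
```

Walking the $J data and repeating this at each 8-byte-aligned entry size yields the full journal.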

The following EnScript parses the USNJRNL•$J file and displays the filename, timestamp and reason code to the console tab of EnCase and to a CSV file in the default export folder.

A definition of the reason codes are as follows:

Flag Description
0x01 The data in the file or directory was overwritten.
0x02 The file or directory was extended (added to).
0x04 The data in the file or directory was truncated.
0x10 The data in one or more named data streams for the file was overwritten.
0x20 One or more named data streams for the file were extended (added to).
0x40 The data in one or more named data streams for the file was truncated.
0x100 The file or directory was created for the first time.
0x200 The file or directory was deleted.
0x400 The user made a change to the file's or directory's extended attributes. These NTFS attributes are not accessible to Windows-based applications.
0x800 A change was made in the access rights to the file or directory.
0x1000 The file or directory was renamed, and the file name in this structure is the previous name.
0x2000 The file or directory was renamed, and the file name in this structure is the new name.
0x4000 A user changed the FILE_ATTRIBUTE_NOT_CONTENT_INDEXED attribute. That is, the user changed the file or directory from one that can be content indexed to one that cannot, or vice versa.
0x8000 A user has either changed one or more file or directory attributes or one or more time stamps.
0x10000 An NTFS hard link was added to or removed from the file or directory.
0x20000 The compression state of the file or directory was changed from or to compressed.
0x40000 The file or directory was encrypted or decrypted.
0x80000 The object identifier of the file or directory was changed.
0x100000 The reparse point contained in the file or directory was changed, or a reparse point was added to or deleted from the file or directory.
0x200000 A named stream has been added to or removed from the file, or a named stream has been renamed.
0x80000000 The file or directory was closed.

(http://msdn.microsoft.com/en-us/library/aa365722(VS.85).aspx)
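Since the reason field is a bitmask, a single record often carries several of these flags at once; a small decoding sketch using a subset of the table (the constant names follow the USN_REASON_* values documented on the MSDN page linked above):

```python
# A few of the reason flags from the table above, keyed by bit value.
REASONS = {
    0x01: "DATA_OVERWRITE",
    0x02: "DATA_EXTEND",
    0x100: "FILE_CREATE",
    0x200: "FILE_DELETE",
    0x1000: "RENAME_OLD_NAME",
    0x2000: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def decode_reason(reason):
    """Return the names of all flags set in a reason bitmask."""
    return [name for flag, name in sorted(REASONS.items()) if reason & flag]

print(decode_reason(0x80000200))  # a delete combined with a close
```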

Download Here v1.0
Download Here v1.1
Download Here v1.2

Friday, June 6, 2008

EnScript to do Credit Card LUHN test + a little more

I was working on writing a custom EnScript for a friend and decided to make a small little LUHN validation EnScript to test the validity of a credit card. I later used the code to incorporate into the custom EnScript, but decided to post this little utility in case someone else has a use or wants to look at it for code ideas.

A LUHN validation test is a mod 10 algorithm that can test the validity of a credit card number. It does not guarantee that the number is an actual working credit card number; it just satisfies one of the major requirements to qualify as a valid credit card number. The process is explained here at Wikipedia, and here is a good graphic from http://www.thetaoofmakingmoney.com/2007/04/12/324.html that illustrates the process:



The problem with just doing the LUHN test is that you will get false positive hits, such as the credit card number "6666666666666664", which passes the LUHN test but is not actually a valid card number. EnCase has a built-in function called isValidCreditCard(). Since that function is built in, there is no way to see exactly what it checks for, but it appears to do just a simple LUHN test as described above, because it passes the number "6666666666666664" as valid.

There is a general table of credit card vendors located here: http://en.wikipedia.org/wiki/Credit_card_number that explains that the first few digits of a credit card number correspond to the card type (i.e. Visa, MasterCard, etc.). Using this list, I wrote the EnScript to validate the number using the LUHN algorithm and then check the chart to see if the number is assigned to a known vendor.
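Both checks together can be sketched as follows (a hedged illustration: the LUHN routine is the standard mod-10 algorithm, and the prefix table holds only a few well-known issuer ranges from the Wikipedia page, not the full list):

```python
def luhn_valid(number: str) -> bool:
    """Standard mod-10 (LUHN) check."""
    total = 0
    for i, digit in enumerate(reversed(number)):
        d = int(digit)
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A small subset of known issuer prefixes, for illustration only.
PREFIXES = {"4": "Visa", "51": "MasterCard", "52": "MasterCard",
            "53": "MasterCard", "54": "MasterCard", "55": "MasterCard",
            "34": "American Express", "37": "American Express",
            "6011": "Discover"}

def card_vendor(number: str):
    """Return the issuer for the longest matching prefix, else None."""
    for length in (4, 2, 1):
        vendor = PREFIXES.get(number[:length])
        if vendor:
            return vendor
    return None

# "6666666666666664" passes LUHN but matches no known issuer prefix.
print(luhn_valid("6666666666666664"), card_vendor("6666666666666664"))
```

Requiring both checks to pass filters out LUHN-valid false positives like the one discussed above.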





If the credit card number passes the LUHN test and is assigned to a known vendor according to the table discussed above, the following screen will be displayed:



Written and tested in EnCase v6.10

Download here

New version of EnCase includes stand-alone utility to capture RAM

Guidance Software released version 6.11 of EnCase this past week. For a while now, EnCase has had the ability to collect the RAM of the local machine it is running on, as well as of a remote machine in the Enterprise version. This was all great, but for the folks who had the Forensic version it did not offer any realistic use, since nobody was going to load EnCase on a target machine in order to dump RAM.

The new version of EnCase (v6.11) now includes a stand-alone RAM collection tool called 'winen.exe'. This tool is installed with the standard installation of EnCase and is placed in the EnCase home folder. It was designed to be deployed on a flash drive or other media to a target machine and used to collect the RAM contents before the system is shut down. The tool collects RAM and places the collected information into an .E01 file automatically, which can then be loaded into EnCase during the analysis portion of the response.

There is a 32-bit version as well as a 64-bit version located in the EnCase directory. To use it, copy the "winen.exe" or "winen64.exe" file to a flash drive or removable HD and then invoke it at a command prompt or double-click it in the Windows GUI (not recommended). The tool is a console application, so if it is launched from the GUI, it will spawn a command shell and then close after it completes.

No command line options are required, but by supplying them you can skip having to enter the information interactively. When run with no command line options, you will need to answer six questions:

Z:\>winen.exe
Please enter a value for the option "EvidencePath":
Z:\memdump
Please enter a value for the option "EvidenceName":
ram2
Please enter a value for the option "CaseNumber":
001
Please enter a value for the option "Examiner":
Mueller
Please enter a value for the option "EvidenceNumber":
001
Please enter a value for the option "Compress":
0

The first question is a little deceiving in its wording. You must supply the path AND filename of the .E01 file you want to create. In this example, I created a file named "memdump" (.E01 is automatically appended) on the Z drive. The EvidenceName option is the name of the object inside EnCase; once loaded, it will appear with that name in the Tree pane. The rest are self-explanatory. The Compress option takes one of three values: 0=none, 1=good, 2=best.

By running "winen.exe -h" you can get a list of command line switches that let you specify all of this at the command line, making it completely automated and non-interactive for use in batch files, etc. You can also use the '-f' option to specify a text config file with these options. An example config file is included in the EnCase home directory.

Z:\>winen.exe -h
Usage: [Options]
-p : Evidence File Path
-m : Evidence Name (Max Size:50)
-c : Case Number (Max Size:64)
-e : Examiner Name (Max Size:64)
-r : Evidence Number (Max Size:64)
-d : Compression level (0=None, 1=Fast, 2=Best) (Default: 0) (min:0 max:2)
-a : A semicolon delimated list of Alternate paths
-n : Notes (Max Size:32768)
-s : Maximum file size in mb (Default: 640) (min:1 max:10485760)
-g : Error granularity (Sectors) (Default: 1) (min:1 max:1024)
-b : Block size (Sectors) (Default: 64) (min:1 max:1024)
-f : Path to configuration file
-t: Turns off hashing the evidence file (default: true)
-h: This help message

The end result is a '.E01" file that loads into EnCase:



As far as the footprint left on the target system, the "winen.exe" program creates a service named "winen_". This obviously means that you need administrator privileges on the target system in order to use this tool. The program also drops a driver file named "winen_.sys" in the directory where the program is run. Notice there is already a "winen_.sys" file included in the EnCase home directory that you can copy and use with the winen.exe program; otherwise, the program will create a new one when run. Take note: because the "winen.exe" program creates a Windows service, changes occur in the registry. Here is a screenshot of the service entry in the registry after running the tool on the target system:



The *downside* is that this service entry remains after the tool is run, and if it was run from local media (i.e. the C drive), the service starts every time the system is rebooted.

The *upside* of this tool is that it appears to dump the contents of RAM for all versions of the Windows NT platform, i.e. Windows 2000, XP, 2003 & Vista! Currently, there are very few tools that can do this, and most are not free!

I have not seen any "official" technical specs or details on the tool's capabilities, but the preliminary tests I have done seem very promising for a very useful incident response tool. If you are already an EnCase v6 user, you just obtained a simple tool that can collect RAM on all of today's modern Windows platforms at no extra cost!

Nice job Matt! ;)

Wednesday, May 28, 2008

EnScript to export selected search hits

This week I was working a case where I was reviewing hundreds of IIS web logs. I had done a keyword search for some unique patterns involving SQL injection. Once found, I wanted to export just those lines (IIS web logs are one entry per line). So I wrote a quick EnScript that exports the complete line in which each keyword is found.

The EnScript works by seeking to the position in the file where your search hit is found, backing up until it finds a carriage return/line feed, and then exporting from the next character after the CR/LF to the next CR/LF, thus exporting one complete line. This is the format of IIS web logs, but it will work with any text file that uses a CR/LF at the end of each line.
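For anyone curious, the same line-carving logic is easy to prototype outside of EnCase. Here is a rough Python sketch of the idea (the function name and the sample log line are mine, not part of the EnScript):

```python
def extract_line(data: bytes, hit_offset: int) -> bytes:
    """Return the complete CR/LF-delimited line containing a search hit."""
    # Back up from the hit to the previous CR/LF (or the start of the file)
    start = data.rfind(b"\r\n", 0, hit_offset)
    start = 0 if start == -1 else start + 2  # skip past the CR/LF itself
    # Scan forward to the next CR/LF (or the end of the file)
    end = data.find(b"\r\n", hit_offset)
    if end == -1:
        end = len(data)
    return data[start:end]

# Example against a fake IIS-style log with two entries:
log = (b"2008-05-28 10:01 GET /index.asp 200\r\n"
       b"2008-05-28 10:02 GET /page.asp?id=1%5c 500\r\n")
hit = log.find(b"%5c")
print(extract_line(log, hit))  # the full log line containing the hit
```

An EnScript operating on search hits does the same thing with file offsets instead of an in-memory buffer, but the carve logic is identical.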

To use, conduct your keyword search against any logfiles. Then SELECT (blue check) the search hits you want exported. You can select the whole search tree or just individual search hits, it's up to you. The following example is a screenshot of an old IIS web log:



Imagine you wanted to search through thousands of IIS web logs for the keyword "%5c" and ended up with a couple hundred hits that you want to export for reporting reasons or to put into an Excel spreadsheet for analysis. The next screenshot shows the search hits after the keyword search:



Select the keyword hits you want to export:



Run the EnScript and look in the default export folder for that case for a file named "searchhits.txt". You can import this into Excel or use any text editor to see the exported data:



The result is a text file with only the lines that contain your selected search hits.

Download here

F-Response to the rescue!

A few weeks ago, I received an evaluation version of the new F-Response tool. Although I knew it was coming and was excited to try it out, it arrived while I was out of town, and when I returned I was inundated with work and could not play with it immediately as I had hoped. Instead, it sat in the shipping envelope in my car.

Last week, I was called by a company that had been the victim of a SQL injection attack. They were frantic and wanted help immediately. I saddled up, grabbed my response kit and met with the company. After getting all the particulars, I responded to the data center, where there were two computers that needed to be imaged.

As I set up my gear, the system admin explained that their main back-end SQL server was tied to *everything* and there was no cluster or backup server, so I could not shut the system down or even reboot it, as it would interrupt their business. I thought, no problem, I will image it live. As I looked at the Dell 2U rack server, I noticed one USB port on the front and two on the back. I collected volatile data and saved it off to a small USB flash disk. I noticed that the volatile data collection was taking a lot longer than normal. I then asked how old the server was and whether the USB was 2.0 or 1.1.....uh-oh.....1.1

I then examined the installed hard drives and found five SCSI hard drives making up a RAID 5 array. The operating system saw one physical disk consisting of two logical partitions, totaling 1.1TB.

After he told me it was USB 1.1, I paused for a bit thinking through all the possible scenarios:

A. Use USB 1.1 and save the live image off to a removable USB hard drive
B. Insert a USB 2.0 card (required a reboot and this was not an option)
C. Insert a Firewire card (required a reboot and this was not an option)
D. Use netcat/cryptcat to throw the image across the network to another device
E. Use FTK imager and save the image to a network share.

I figured I would try option A and see how long the image would take. After setting everything up, I started FTK Imager and it leveled out at an estimated 440 hours....hmmm...440/24 = 18.3 days... ouch!

So option A was out. After thinking a bit, I decided to use option F: F-Response! I remembered that I had the package with me but had not tried it out yet. I retrieved the package from my car, set up a VMware machine on my forensic laptop and went through the installation. I then tested it out using EnCase as the imaging platform and found it worked flawlessly.

I was still concerned about sending 1.1TB of data across a network wire that was actively being used by clients and the web server. After digging around a bit, I found a separate gigabit NIC on the back of the server that was not being used, so I connected a crossover cable directly to my laptop and statically assigned some IP addresses. I then copied the F-Response client application to a flash disk and ran it on the target server. Two minutes later, I had a direct connection and the 1.1TB drive was showing up on my forensic laptop as a local drive. I started EnCase and previewed the drive. I started the imaging process using EnCase and it reported 30 hours until completion, much better than 18 days..;)

--fast forward-->

28 hours later, the image was done. When the dust settled, I had an EnCase image file sitting on a LaCie 2TB removable drive, complete and verified.

The F-Response setup process took all of about 5 minutes and was extremely easy. There is a very small learning curve in order to understand how it works. The best part is that it allows you to use whatever forensic platforms you normally use; F-Response is not a forensic analysis tool itself, but rather a conduit that connects remote hard drives to your local workstation so that your traditional tools can be used.

Hogfly posted a cool video of using F-Response here: http://forensicir.blogspot.com/2008/04/ripping-registry-live.html

Harlan also posted a blog about this tool here:
http://windowsir.blogspot.com/2008/05/f-response.html

There is also a great little demo video on how the tool works on the F-Response website: http://www.f-response.com/

If you have not seen this tool yet, I highly recommend you take advantage of their $100 trial version. Their field kit, consultant and enterprise versions are insanely priced compared to other forensic tools. Once you see or try this tool, I think it will find a permanent home in your response kit, like it has in mine!

Friday, May 16, 2008

Summary report of file types by extension

I received a request from a friend asking if there was an easy way in EnCase to summarize all the file extensions and the number of files for each extension (like in FTK). Sounded like a useful EnScript.. ;)

The following EnScript will create a list of all the file extensions as EnCase sees them and then count the number of files in each extension group. The output is printed to the CONSOLE tab. In addition, a file named "File Count by extension.csv" is created in the default export folder that can be opened in Excel for sorting and additional formatting.
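For the curious, the same summary logic is simple to reproduce outside of EnCase against a mounted folder tree. Here is a rough Python sketch (the function names and CSV layout are my own choices, not the EnScript's):

```python
import csv
import os
from collections import Counter

def count_extensions(root: str) -> Counter:
    """Walk a directory tree and count files per (lowercased) extension."""
    counts = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower() or "<none>"
            counts[ext] += 1
    return counts

def write_csv(counts: Counter, out_path: str) -> None:
    """Write the extension summary, most common first, for Excel."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Extension", "Count"])
        for ext, n in counts.most_common():
            writer.writerow([ext, n])
```

The EnScript version works on the case's evidence entries rather than a live file system, but the grouping and CSV output are the same idea.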

Download Here

** I have posted the readable .EnScript version of this script as a learning exercise, since this EnScript is pretty simple, easy to follow and a good one to learn from.

Friday, May 2, 2008

Find duplicate files

This post is not really "forensic" related but it may have use to others...

If you are like me, you have several hard drives lying around that have all sorts of "stuff" on them: things you have acquired over time and saved with the thought that the information, programs or files may come in handy someday. Then, after filling that drive up, I end up upgrading to a larger drive, but I don't want to delete anything in case I need it! After reviewing several of these drives, I found I had lots of duplicate files scattered throughout the drives in various folders.

My solution was to write an EnScript so that I could preview all my drives at once; the EnScript then hashes all the files and lists all the duplicates for me. I can then decide whether I want to manually delete the dupes or not. It also lists how much space is wasted by all the dupes.

Most of my drives are formatted using NTFS. The NT file system has a feature called hard links, which basically allows you to have multiple directory entries for the same file while only taking up the space of one file. This works because all the directory entries can point to the same MFT record and the same data. For example, you can have a file named "lance mueller.doc" in a folder named "e:\documents\" and a file named "john smith.doc" in a folder named "e:\old stuff". There are two separate directory entries and the names don't even have to match, but they can point to the same exact MFT record number, and therefore to the same exact data, reducing the amount of space wasted by duplicates. Opening one of the files and editing it affects both directory entries.

I took the below-listed EnScript one step further and had it locate an original file, and then for all duplicates, delete the dupes and create a hard link to the original file. This leaves the directory entry in place but reduces the amount of space being wasted by pointing all the dupes to the same data as the original. I am not making that version of the EnScript available yet, but I am posting the EnScript that lists all the dupes in the console pane of EnCase so you can decide what to do with them.
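For readers who want to experiment outside of EnCase, here is a rough Python sketch of the same two-step idea: hash everything to find dupes, then optionally delete each dupe and replace it with a hard link to the first copy. The function names and the MD5 choice are mine; os.link is the standard hard-link call:

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files under root by content hash; return only groups with dupes."""
    by_hash = defaultdict(list)
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            by_hash[h.hexdigest()].append(path)
    return {k: v for k, v in by_hash.items() if len(v) > 1}

def relink_duplicates(groups):
    """Replace each duplicate with a hard link to the first (original) copy."""
    for paths in groups.values():
        original = paths[0]
        for dupe in paths[1:]:
            os.remove(dupe)
            os.link(original, dupe)  # both names now share one set of data
```

This is destructive by design, so run find_duplicates alone first and review the list before letting anything relink.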


Download Here

Monday, April 21, 2008

Additional Bitlocker Incident Response tips

In January, I posted some Incident Response tips on how to deal with a Vista system with Bitlocker enabled. You can read the initial post here. I was recently doing some training and we discussed Bitlocker techniques in depth and decided to post a follow up with some additional tips.

The first thing you must do when you roll up on a system running Vista is to determine if Bitlocker is enabled. Remember that Bitlocker is only available in the Enterprise and Ultimate editions of Windows Vista. A quick look at the system properties should tell you what version you are dealing with:



There are a couple of easy ways to determine if Bitlocker is enabled. The first method is to simply open Windows Explorer and look at the logical drive list. Bitlocker requires two logical volumes: one for the OS and user data files, and a second small boot partition, 1.4GB in size, that is not encrypted. By default, Windows assigns the logical drive letter "S" to this small boot partition:



Also, NTFS is the required filesystem on the logical volume encrypted with Bitlocker.
The second method to determine if Bitlocker is enabled is to simply look at the Bitlocker configuration from the Control Panel:



Finally, you can open an Administrative Command Prompt and use the following command to check the status of Bitlocker:

"cscript manage-bde.wsf -status"



This last option tells you that the logical volume "C" is encrypted with Bitlocker and that an external key (USB) and a numerical password are being used as protectors. This tells the investigator that there must be an external USB device with a key file on it (with a .BEK extension) and that there may be a numerical password written down somewhere. The password is very long, consisting of eight groups of six numbers each, such as: "363319-629200-548735-017523-429363-314292-005962-259380". The status output also tells you whether Bitlocker is currently enabled.
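Since the recovery password format is so regular, it is easy to search for programmatically. Here is a quick Python check of the eight-groups-of-six-digits pattern (the regex is my own construction based on the example above):

```python
import re

# Eight dash-separated groups of exactly six digits
RECOVERY_PW = re.compile(r"^\d{6}(?:-\d{6}){7}$")

def looks_like_recovery_password(s: str) -> bool:
    """True if the string matches the Bitlocker recovery password layout."""
    return bool(RECOVERY_PW.match(s))

print(looks_like_recovery_password(
    "363319-629200-548735-017523-429363-314292-005962-259380"))
```

The same pattern translates directly into a grep/keyword search when sweeping seized documents or text fragments for a written-down recovery password.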

Once you have determined that Bitlocker is in fact installed and enabled, the investigator has to decide how to handle this volume so later analysis can be performed. There are a couple of options available at this point. The investigator could use a live-response CD and make an image of the logical drive while the system is running. It is important to understand that a LOGICAL image must be taken, because it uses the Windows API to obtain the decrypted data. If a physical image is taken, you will end up with a full image of the hard drive in its encrypted state and will have to deal with decrypting it later in order to perform an analysis.

Another solution is to disable Bitlocker. Disabling Bitlocker does not decrypt the data (which would alter each file). Instead, it stores the key on the disk so that the volume can be decrypted the next time it is booted or accessed, without the need for the startup key or numerical password. The following command shows how to disable Bitlocker from the command line:

"cscript manage-bde.wsf -protectors -disable c:"



The above command will disable Bitlocker (not decrypt) and if later attached to another Vista machine using a write blocker, all the data will be visible and available for imaging.

The investigator should also obtain the numeric recovery password as a safety measure to ensure later access to the drive for imaging. The following command will display the numerical recovery password:

"cscript manage-bde.wsf -protectors -get c:"



Here is a video showing all the above commands:

Monday, March 17, 2008

XOR entire file or selected text

XOR is a common and simple symmetric encryption algorithm. It is commonly used by malware to 'hide' certain identifiable information in a data file or executable. Because the algorithm is so simple, very little processing power is needed to quickly encrypt or decrypt data, making it a popular technique.
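Because XOR is its own inverse, a single routine both encrypts and decrypts. Here is a minimal Python illustration using a one-byte key (the key value and sample string are arbitrary):

```python
def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key; applying it twice restores the data."""
    return bytes(b ^ key for b in data)

secret = xor_bytes(b"some identifiable string", 0x5A)  # 'encrypt'
plain = xor_bytes(secret, 0x5A)                        # the same call decrypts
print(plain)
```

Real-world uses often involve multi-byte repeating keys, but the principle (and its reversibility) is exactly the same.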

Some software vendors also use it to 'obfuscate' data. Norton Antivirus uses it to store identified malware files in the quarantine folder: when Norton AV detects a virus, it XORs the virus with a constant key and then places it in the quarantine folder. I had previously written an EnScript to extract files from the quarantine folder in Norton version 7.5 so they could be examined in their native form. Norton also stores its logs encrypted using XOR (in most versions). I wrote this EnScript specifically so I could quickly decrypt Norton logs within EnCase when doing an investigation, to see what kind of virus activity had recently taken place, but I quickly found other uses for it.

The EnScript allows you to simply highlight (highlight, not check) a file in the table pane (upper right) of EnCase and then supply a hex value as the XOR key.



You can have the resulting XOR data displayed in the console, or if dealing with binary data, such as with a malware executable, you can have the data written to a local file that you can then examine with some other 3rd party tool.

Download here (EnCase v6)

Tuesday, March 11, 2008

Export IE Internet History from unallocated for use with 3rd party processors

A user recently contacted me about an old v4 EnScript that was used to export Internet Explorer Internet history from unallocated so it could be processed with NetAnalysis. She asked if I would update the EnScript to work with v6, since she used NetAnalysis with almost every case and had become accustomed to the output.

I have updated the EnScript to work with v6. Simply select (blue check) any file(s) you wish to search for IE Internet history (Unallocated, pagefile.sys, hiberfil.sys, etc.) and then run it. If any history is found, it will be exported to a file that can then be parsed by NetAnalysis (or another 3rd party tool).

Download Here

Sunday, March 2, 2008

Ghost - Part Deux - How to detect its use

I decided to take my research a little farther and see what I could come up with to detect whether ghost was used, either in the past or to make an image being provided to an examiner. Plus, I had already shot half my weekend, so why not finish the whole thing off!

The first thing I will mention is the "fingerprint" that ghost can leave behind. A reader left a comment asking about using the "-FNF" command line switch when making an image so that the ghost application does not mark the hard drive(s). I did not use the "-FNF" switch at any time when I made any of my images, and as long as I used the "-IR" switch, an exact duplicate was made of the source disk each time and ghost did not modify the original hard drive. BUT, when ghost started up, I was asked whether I wanted to mark the hard drives with the following screen:



I answered "Continue without marking drives".

A hash baseline was taken of the source drive before ghost was ever used, and the before and after hashes of the source remained constant regardless of whether I used the "-FNF" command line switch. When I did specify the "-FNF" switch, I was still prompted with the screen above. As long as I answered "Continue without marking drives", the original remained untouched.

By answering "OK" to the screen above, ghost DOES MARK ALL DRIVES it detects! Now, obviously the source drive would most likely be connected via some type of write-blocking device so ghost would not be able to write this change to the source, but it may mark other drives connected to your system that you don't want marked. If you do answer "OK", a "fingerprint" is placed one sector prior to the volume boot sector, typically at physical sector 62:



Once marked, ghost will stop asking at startup whether you would like to mark the drives. I have not deciphered the "fingerprint" data written to this sector yet, but it appears the first dozen or so bytes remained constant between different markings, while the rest of the data changed each time I marked the drive, indicating some type of possible date stamp. When creating the image using the "-IR" command line option, if the source drive is marked, then the destination image will be marked as well.
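Checking for the mark programmatically just means reading the sector immediately before the volume boot sector from a raw image. Here is a small Python sketch assuming 512-byte sectors and the typical sector 62 location (the image file name in the comment is illustrative):

```python
SECTOR_SIZE = 512

def read_sector(image_path: str, sector: int) -> bytes:
    """Read one sector from a raw (dd-style) disk image."""
    with open(image_path, "rb") as f:
        f.seek(sector * SECTOR_SIZE)
        return f.read(SECTOR_SIZE)

# e.g. inspect sector 62 (one before a VBR at sector 63) for non-zero data:
# marked = any(read_sector("disk.dd", 62))
```

On an unmarked drive that sector is normally all zeros, so any non-zero content there is worth a closer look in a hex view.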

The second part of my testing involved looking at how ghost copies files from one drive to a destination drive, either directly or through a ghost image file. If ghost is used with the "-IR" command line switch, then ghost leaves everything in the same place as the original. My main focus was inspecting the changes with the default option of ghosting a source drive and then restoring it out to another drive, as a typical user or IT professional would do. When I created a ghost image of a source drive and then restored that image, all the internal file system creation dates remained unchanged from the source's timestamps.



Looking at the screenshot above you can see the timestamps of the internal file system objects from the original drive have carried over to the ghost copy. This was true if I made a disk to disk image or a partition to partition image.

The only obvious artifact that ghost was used is the file extents field. The file extents field indicates how the file is fragmented. You can see in the original that the "Unallocated Clusters" object represents all of the unallocated clusters across the logical volume. In the source, there are 700+ "patches" of unallocated sprinkled around the drive. In the new ghost image, there are only three. The first is a small area consisting of 16 sectors immediately following the $AttrDef internal file. Then there is a large gap immediately following the $MFT, which is considered the "MFT zone" and is usually about 12.5% of the volume size. This area is reserved for use by the MFT as it grows. Immediately after the MFT zone are all the other remaining files on the volume, with no unallocated gaps in between them. Not all files are contiguous on the ghost image, but there are no files with unallocated before or after, except the ones I mentioned. The following screenshot is of the original source drive showing obvious signs of fragmentation:



Here is another contrasting example by viewing the file extents of the Unallocated Clusters object:



The newly ghosted drive only has the three aforementioned areas of unallocated space. There are still files present on the newly created ghost image that have multiple file extents (fragmented), but there are far fewer than on the source drive. This screenshot is of the drive that received the ghost image:



Summary:
Although there are a few indicators, such as the presence of the "fingerprint" indicating ghost has "marked" the drive at some point in the past, there is no surefire way to detect the use of ghost with certainty. As indicated above, the lack of unallocated space fragmented around the drive would certainly be suspicious, but as the system gets used after a ghost image is restored, it will begin to fragment as before, so detection will depend on the timing of the seizure. I would certainly look at the timestamps on the internal file system files, coupled with the date in the registry as to when the OS was installed, to get an idea of how long that file system and OS have been operating, and then compare that with the absence of file fragmentation to help support the idea that ghost was used.

Here is a video demonstrating adding an uncompressed ghost image file (.gho) into EnCase and then manually adding the VBR so that EnCase parses the volume; you can then image it like any other device.

Saturday, March 1, 2008

Ghost as a forensic tool

If you have not figured it out yet, I read several forensic listservs (great way to learn and kill half my weekend ;) and I often find myself picking a topic I read about on one of the listservs and then blogging about that topic.

So this week's topic is about using Symantec's Ghost utility as a forensic tool. The ghost utility has been around for many years and is most commonly known for and used by IT professionals to create baseline images of workstations and servers for quick deployment. I doubt that at its inception ghost was ever designed to be used as a forensic tool. But somewhere along the way, Symantec added some functionality to the ghost utility to make "forensic" copies of hard drives, specifically for law enforcement purposes.

Many years ago, I remember going to training and hearing that ghost was an unacceptable tool for creating a 'forensic' copy, as it did not create an 'exact' image and changed a few bytes, so you would never get the same hash as the original. I remember performing an exercise, creating a ghost image and comparing the hash values of the original and the ghost image to see that they did not match. As I mentioned, somewhere along the development path of the ghost utility, the ability to make an exact forensic copy was included. As best I can tell, it started with ghost v5.1, circa 1999. From the "Whats new.txt" included with that version:

"-ID (Image Disk) is similar to -IA (Image All), but also
copies the boot track, as above, extended partition
tables, and unpartitioned space on the disk. When looking
at an image made with -ID, you will see the unpartitioned
space and extended partitions in the list of partitions.
The -ID switch is primarily for the use of law enforcement
agencies who require 'forensic' images."

Then in Ghost 2002, the command line switch "-IR" was included:
"-IR The image raw switch copies the entire disk, ignoring the partition table. This is useful when a disk does not contain a partition table in the standard PC format, or you do not want partitions to be realigned to track boundaries on the destination disk. Some operating systems may not be able to access unaligned partitions. Partitions cannot be resized during restore and you need an identical or larger disk."

(ftp://ftp.symantec.com/public/english_us_canada/products/ghost/manuals/ghost2002.pdf)

So what do these command line options do and which is appropriate to use?
I have tested several image files created using the various ghost command line switches and here is a summary:

-ID
This command line option appears by description to create a bit-level image (they call it sector-by-sector), and it in fact does. If a hard drive image is created using the "ghost -ID" switch, then a bit-level image is created. The problem with this switch comes when you restore the image back out to a hard drive: the switch will cause ghost to adjust the partition boundaries on the destination drive if they are not standard. For example, if the source HD has 32 sectors per track (SPT) and a ghost image is created, when the image is restored back out to a hard drive, ghost will adjust the partition boundaries if the disk geometry is different on the destination drive and make the appropriate changes in the partition table. This will obviously result in different hash values being generated. This option is configurable in the actual ghost application under the Options->Image/Tape tab:



-IR
This command stands for "image raw", and it too makes a bit-level image resulting in an exact duplicate. The difference with this switch is that ghost will ignore the disk geometry on the destination drive when the image is restored and lay the image out exactly as it was on the source. An image created with the -IR switch will result in the same overall drive hash as the original, ASSUMING it is restored out to a hard drive of the exact same size. This option does NOT appear in the ghost options tab and is a command line switch only.

Ghost (with any switch) does NOT make an image file (.gho extension) that is a raw bitstream image like 'dd' does. A look at a ghost image file in a hex editor will show you that there is a header with information that ghost uses to restore the image correctly and that was not on the source drive, typically in the first six sectors of the image file. The actual bitstream copy of the source drive follows, and the footer used by the ghost utility is at the end of the image file. Ghost allows you to compress the image of the source drive when the image is made. This has no effect on the data when it is restored; it only affects the data as it sits in the ghost image file (.gho).

The only appropriate command line option for use when making a forensic image is the "-IR" option. Although not a common forensic tool and often believed to be unacceptable for forensic use, current versions of ghost can make an exact duplicate of a hard drive when the -IR command line option is used.

The only other problem is that there is no easy way to tell which switch was used when the image was created. If you try to look at an image that was created with the -ID or -IR switch in Ghost Explorer, an error message will appear stating that one of those command line options was used, but it does not tell you which:



If you look at the details pane when restoring the image, a disk image created using the -IR command line switch will say "RAW DISK IMAGE":



An image created using the -ID command line switch will just show the file system type:



There is also no way to validate the image's integrity. I opened a ghost image file with a hex editor and erased several references to a file, and ghost happily restored the image without reporting an error. Since there is no way to generate hash values for blocks or for the entire source HD from within ghost, you have to take a baseline hash BEFORE ghost is used. When the image is restored, that baseline hash can be compared to the restored drive's hash, again using an external tool outside of ghost.
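Taking that baseline is easy with any external hashing tool; for example, here is a chunked Python hasher suitable for large raw images or devices (the device paths in the comments are illustrative only):

```python
import hashlib

def hash_image(path: str, algo: str = "md5") -> str:
    """Hash a raw image or device in 1 MB chunks so huge drives fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# baseline = hash_image(r"\\.\PhysicalDrive1")  # before ghost touches anything
# restored = hash_image(r"\\.\PhysicalDrive2")  # after restore; should match
```

Run it once against the source (through a write blocker) before ghost is used, and again against the restored drive; matching digests support the claim that an exact copy was made.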

To summarize, the important things to remember if you are using ghost to create an image or if you accept a ghost image are:

If using ghost to create an image:
Create a baseline hash of the source drive before using ghost
Use ghost with the "-IR" command line switch
Make a ghost image of the "disk" not just a partition
Hash the .gho file for reference (convenience)

When accepting a ghost image file:
Ask for documentation on which command line switches were used
Verify via the details pane when restoring the image
Verify it is an image of the entire disk, not just a logical partition
Ask for a baseline source hash from before ghost was used
Verify the restored image hash to the baseline

*Note - all testing and screenshots were done using ghost 2003.

Thursday, February 28, 2008

Create 'dd' image file from EnCase evidence and redact certain files

***Updated version 1.1 - Sanity checking on deleted/overwritten files, files with invalid clusters, and folders

This project started out as a request from a blog reader who was ordered to provide a copy of an evidence file to another party but redact certain files. He had already figured out a way to do this with a 3rd party tool, but wanted to dump a text file of the offsets and lengths of the selected files so they could be read by a 3rd party tool and some automated wiping could take place.

Back in July of 2007, I released an EnScript to make a 'dd' image file from an EnCase evidence file (the original post is here). I started thinking about how easy it would be to incorporate that feature into this EnScript. An hour later, here is a modified version of the original "export to dd image" EnScript, with the ability to redact selected items.

Basically, the way it works is that you load up one piece of evidence and then select any item(s) you want redacted. You can select anything, including unallocated space, which will then be written as all zeros in the 'dd' image file. The selected filenames and metadata are all maintained; just the data contents are redacted. Check the console for some logging information.
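The core of the redaction step boils down to overwriting (offset, length) ranges with zeros. Here is a rough Python sketch of that step against a raw 'dd' image (the range-list format is my own, not the EnScript's):

```python
def redact_ranges(image_path, ranges, chunk=1 << 20):
    """Overwrite each (offset, length) range in a raw image with zero bytes."""
    with open(image_path, "r+b") as f:
        for offset, length in ranges:
            f.seek(offset)
            remaining = length
            while remaining > 0:
                n = min(chunk, remaining)  # write in chunks for large ranges
                f.write(b"\x00" * n)
                remaining -= n
```

Feed it the offset/length pairs for the files to be withheld and the directory entries stay intact while the data itself becomes zeros, which is exactly why the resulting re-acquired evidence file compresses so well.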



Now, this obviously has some interesting uses, with the most obvious being why I originally set out to make this EnScript. But after working on it and playing around with it, I came up with several other very useful applications, especially when making example evidence files for students. The cool part is you can load up an evidence file, select unallocated, and when it's done, load up the 'dd' image file and quickly reimage it; the resulting evidence file is much smaller since the wiped data is stored as sparse data. So when working with sample evidence files where the pagefile, unallocated space or other files are not needed, you can quickly wipe them out and reduce the overall size of the evidence file significantly.

Before (now you see it):



After (now you don't):


All other files remain intact and all other individual file hash values verify between the original and the 'dd' image.

Download Here

Friday, February 22, 2008

Bypassing a Windows login password in order to boot in a virtual machine

I have seen several recent posts on the forensic listservs asking how to get past a Windows logon password in order to boot an image in a VMware machine. Normally, the approach I would take is to extract the SAM and SYSTEM registry hives and then use a tool to load rainbow tables and try to obtain the password. If there is no Lanman password in the SAM file, this can put a serious dent in my plan, because I currently don't have any large NT rainbow tables to use.

As a refresher, Windows 2000/XP/2003 stores two hashes of a user's password by default. The first is the weak Lanman hash and the second is the stronger NT hash. It is the Lanman hash that most rainbow tables are built for; because of its inherent weakness and how it is stored, it is the quickest to attack. Windows Vista does not store the Lanman hash, only the NT hash. Windows 2000/XP/2003 will only store a Lanman hash if the password is less than 15 characters, so if a user sets a password greater than 14 characters, no Lanman hash is stored. In addition, there is a security policy that can be enabled to stop storing Lanman hashes for passwords of any length. Many people use Nordahl's password recovery boot disk to either change or blank out a password. This technique works very well and I have used it many times.

I discovered two additional ways to get around passwords when they are either too difficult for rainbow tables or when only an NT hash is stored and a brute-force attack would take too long. The techniques I am going to describe will not recover the password; they will merely let you log in to the system with a specific user account. Getting access to the system using these techniques will not let you access any files that are protected via EFS in Windows XP or Vista, since the password is used as part of the encryption/decryption process.

The first technique is embarrassingly easy and simply involves deleting the SAM file. It seems that in Windows 2000/XP/2003, if you simply delete the SAM file, a new one is created upon boot with only the two normal built-in Administrator and Guest accounts, neither with a password. A variation on this is to rename the original SAM file and replace it with the copy in the repair (2000/XP/2K3) or regback (Vista) folder. That copy of the SAM is from when the system was originally installed, and the user accounts present at that time may not have had any passwords set. You could also try to use the copies from the various Restore Point captures to see if there is a capture from a time when no password was set for a specific user, and then use that SAM file to boot.

The second technique simply involves faking Windows out. To do this, you open the SAM file with a hex editor and find the data that represents the V record for the particular user account you want access to. There is a field in that data structure that indicates whether a password is set. Normally it has a value of 0x14 if a password is set. Simply change that value to 0x4 and then boot the machine. Even though a password hash is present, the account will log in with no password entered. I have used this technique with Windows 2000, XP & Vista and it works well. I have encountered a few instances where the flag that indicates whether a password is present already had the value 0x04 even though a password was set. I am not sure if the SAM.log transaction log held the changes, or if values stored elsewhere while the system was running were used to determine whether a password was set, but the above technique will obviously not work in that situation.
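The same patch can be scripted instead of done by hand in a hex editor. This is a minimal Python sketch, not a forensic tool: it assumes the standard V-record layout described in the guideline later in this post (4-byte length fields at offsets 160 and 172 of the raw V value) and operates on an extracted copy of the bytes, never on a live hive:

```python
import struct

# Offsets of the 4-byte hash-length fields within the raw "V" value.
# These offsets are an assumption based on the layout described below.
LM_LENGTH_FIELD = 160   # Lanman hash entry length
NT_LENGTH_FIELD = 172   # NT hash entry length

def blank_password_flags(v_record):
    """Rewrite any 0x14 length indicator to 0x04 so Windows treats
    the account as having no stored password hash."""
    patched = bytearray(v_record)
    for field in (LM_LENGTH_FIELD, NT_LENGTH_FIELD):
        (length,) = struct.unpack_from("<I", patched, field)
        if length == 0x14:
            struct.pack_into("<I", patched, field, 0x04)
    return bytes(patched)
```

The length fields are little-endian DWORDs, so the one byte you would flip in a hex editor is the low byte of each field.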

In that case, I have been able to take a password hash from another user account that I was either able to crack or that was blank, and "transplant" it into the V record of the account I want access to. By doing this, you are effectively swapping the password hash with a known password hash.
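The transplant described above is mechanically just a 16-byte copy between two V records. This is a hedged sketch under the assumption that both records use the standard layout (NT entry offset field at 168, stored offsets relative to 204, 4-byte header before the hash); it only performs the byte copy, as the post describes, on extracted data:

```python
import struct

DATA_BASE = 204        # stored hash offsets are relative to this constant
NT_OFFSET_FIELD = 168  # 4-byte field holding the NT entry's relative offset

def transplant_nt_hash(donor_v, target_v):
    """Copy the donor's 16-byte NT hash (skipping its 4-byte header)
    into the target V record; returns the patched target bytes."""
    patched = bytearray(target_v)
    d_off = struct.unpack_from("<I", donor_v, NT_OFFSET_FIELD)[0] + DATA_BASE + 4
    t_off = struct.unpack_from("<I", target_v, NT_OFFSET_FIELD)[0] + DATA_BASE + 4
    patched[t_off:t_off + 16] = donor_v[d_off:d_off + 16]
    return bytes(patched)
```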



In the screenshot above, the highlighted one-byte values of 0x14 are the length indicators showing that the Lanman hash entry is 20 bytes in length and the NT hash entry is also 20 bytes in length. To bypass the password, simply change these to 0x4 with a hex editor. The highlighted data at the bottom of the screenshot is the Lanman and NT password hashes.

I need to do some more testing on all the various versions, but to date this has worked for me every time without a problem.

An excellent resource for learning the data structure of the SAM file can be found here.

Here is a general guideline on how to determine if a password is set from inside EnCase:

1. A user in Windows has potentially two password hashes that are stored
(only one password, but two different hashes are generated): the Lanman
hash and the NT hash. Lanman is legacy and not always stored, depending
on either the local security policy or the domain policy (or a registry
hack).

2. The "V" key in the SAM registry hive under the user's SID is where
the password is stored (remember that this is not a straight hash of the
user's password, but rather a DES-encrypted version, encrypted again
using the SYSKEY).

3. Starting from the very beginning of the V key, sweep 156 bytes. The
next 4 bytes are the offset to where the Lanman hash is stored in the V
key, relative to a constant offset of 204. So, for example, if you look
at offset 156 and see 64 01 00 00, which equals 356 in decimal, you
would add 356 + 204 = 560. Then make note of the next 4 bytes (starting
at offset 160), which will be either 04 00 00 00 or 14 00 00 00. This
represents the length of the hash entry that is stored, including a
4-byte header. If the length is "4", then just the header is present
and no hash is stored (i.e. no password). If the length is "14" (20
decimal), then a hash is stored as well as the header (4-byte header +
16-byte hash = 20).

4. Then, from the beginning of the V key, go 560 bytes (the sum of the
stored offset + the 204 constant); the next four bytes are the header (I
have seen 01 00 01 00 and 02 00 01 00 as headers), and the next 16 are
the Lanman password hash, if present.

5. For the NT hash, start at the beginning of the V key again and
sweep 168 bytes. The next four bytes are the offset to where the NT
hash is, relative to offset 204, and the next four are the length. The
same rules apply as above: if the length is "4", then no NT hash is
stored and only the header is present; if the length is "14", then
there is a 16-byte password hash and a 4-byte header present. You
would follow the same procedure as step 3 above: take whatever value
is at offset 168, add the constant of 204, and that is the offset to
where the NT hash begins (actually the header begins, then the hash).
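The steps above can be sketched in code. This is a hedged Python example, assuming the field offsets quoted above (156/160 for the Lanman entry, 168/172 for the NT entry, data base of 204); it operates on the raw bytes of a V record that has already been extracted from the SAM hive:

```python
import struct

DATA_BASE = 204  # stored hash offsets are relative to this constant

def hash_entry(v_record, offset_field, length_field):
    """Return (absolute_offset, entry_length, hash_present) for one entry."""
    (rel,) = struct.unpack_from("<I", v_record, offset_field)
    (length,) = struct.unpack_from("<I", v_record, length_field)
    # 0x04 = 4-byte header only (no hash); 0x14 = header + 16-byte hash.
    return rel + DATA_BASE, length, length == 0x14

def password_set(v_record):
    """True if either a Lanman or an NT hash is stored in this V record."""
    _, _, lm = hash_entry(v_record, 156, 160)   # Lanman entry fields
    _, _, nt = hash_entry(v_record, 168, 172)   # NT entry fields
    return lm or nt
```

With the example values from step 3, a stored Lanman offset of 64 01 00 00 (356) yields an absolute offset of 356 + 204 = 560, and a length of 0x14 means a hash is present.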



Good luck!

Friday, February 15, 2008

USB Device History EnScript - Updates

James recently emailed me to tell me that he had taken the USB Device History EnScript and modified it to display additional and different information. So, in the spirit of sharing and with the author's consent, here it is:

From James Habben -
The output now goes into the Records tab of EnCase so you can sort it, and it is included in the case bookmarks.

Download Here

From Edge -

- Semicolon separation of all information pulled from the registry so
it can be pulled into IDEA, Access or Excel.
- Dump of ENUM\USB including VID, PID, Serial_Num, Device_Description,
Location_Information, Manufacture and ParentIDPrefix.
- Updated ENUM\USBSTOR to also output the date the registry key for
that device was created (i.e. the date the USB device was first plugged
in).
- Added James Habben Records Tab feature.
- Fixed issue with Mounted Devices not producing the serial number for
some fixed drives, e.g. the C drive.
- Made Mounted Devices more conducive to Excel/Access/IDEA.
- Fixed my stuff up with Manufacturer and Location_Information
ordering.
- Changed console output format slightly to reflect James's layout.
- Special Thanks to James for fixing an interesting issue I was having
with mountVolume method and for making his source code available.

Download Here

Computer Forensics, Malware Analysis & Digital Investigations
