Wednesday, December 15, 2010

EnCase EnScript to merge two hash sets (.hash) into one hash set

Okay, so you can probably tell by the last several posts that I am doing a lot of work with hash sets right now. Following up on my previous posts, I had some hash sets from various servers that were created individually, but I later wanted to merge them together. I had written an EnScript to do this a few years ago, but quite honestly I have not used it lately, and I noticed that there is a new HashMergeClass in EnCase, so I figured I would try it out.


The good part about the built-in HashMergeClass is that it's faster than doing it all manually with an EnScript, and it does the binary sorting/de-duping automagically. Anyway, here is a quick EnScript that will prompt for two *.hash files and then merge them together into one .hash file. The resulting merged file is placed into the root of your Hash Set root folder, with the names of the two individual .hash files being used as the new filename. For example, if you have two hash sets named:

"Windows XP" and
"Windows 7"

The resulting merged file will be named:

"MERGED_Windows XP_Windows 7.hash".
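For readers curious what the merge is doing under the hood, here is a rough sketch in Python (not EnScript, and treating each hash set as a plain text list of hash values rather than EnCase's binary .hash format, so this is purely illustrative of the sort/de-dupe/rename logic):

```python
from pathlib import Path

def merge_hash_sets(path_a, path_b, out_dir):
    """Combine two hash lists, sort and de-duplicate them, and name
    the output after the two inputs (MERGED_<a>_<b>.hash)."""
    a = set(Path(path_a).read_text().splitlines())
    b = set(Path(path_b).read_text().splitlines())
    merged = sorted(h for h in (a | b) if h)   # union, drop blanks, sort
    name = "MERGED_%s_%s.hash" % (Path(path_a).stem, Path(path_b).stem)
    out = Path(out_dir) / name
    out.write_text("\n".join(merged))
    return out
```

The real .hash format is binary and carries set metadata, which is exactly why the HashMergeClass is the better tool inside EnCase; this just shows the idea.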

Download Here

Tuesday, December 14, 2010

EnCase Enterprise EnScript to add application descriptors from selected processes in snapshot data

I was recently helping a company set up and deploy EnCase Enterprise on their network. Part of the initial setup process is to create some baselines of their servers & workstations. I recently posted about creating some quick and dirty hash sets here.

In this case, I needed to create some application descriptors to use as machine profiles in EnCase. I prefer to use regular hash sets when doing analysis because they allow you to identify running processes that are known, as well as to check static files (not running).

App descriptors are exclusively used in EnCase Enterprise/FIM. You *could* technically use them in Forensic/LE edition when you run the scan local machine EnScript, but if you feel you need them on your local machine, then I think you have more to worry about than app descriptors. Still, knock yourself out. An app descriptor is used to identify running processes, DLLs and drivers when collecting snapshot data. If you have hash sets loaded into the library, those will also be compared and displayed if any of the processes match a known/notable hash set. The downside to using hash sets is that you cannot use the hash/data in a hash set as part of a machine profile. Machine profiles are used to define which processes are approved or not approved for a particular machine or machine profile (i.e., all the webservers), and that is what an app descriptor is used for.

EnCase Enterprise includes an EnScript to create app descriptors, but it involves mounting the remote device and, honestly, it can take a while and I am impatient. So I decided I would write an EnScript that allowed me to check each process from the processes tab under the snapshot data and then quickly add it as an app descriptor. Now, you can do this manually by clicking on each one, one at a time. But as I mentioned, I'm impatient (as well as having ADD), so I wanted a quick way to find all the processes that matched a hash set, select them, and then add them as app descriptors.

The use of this EnScript is pretty straightforward. Select whatever processes you want to add under the snapshot->processes tab, then run the EnScript. The EnScript is "global". This means you can check processes across multiple snapshots (machines) and they will all be added.



It will then prompt you for a folder where you want to place the new app descriptors. You can add a folder by right-clicking on any object in the tree.


If you don't select a folder, then the EnScript will terminate without doing anything. If you don't select at least one process from the snapshot->processes tab, then you will receive an error dialog reminding you that you need to select at least one process to add as a descriptor.



Creating hash sets from gold builds, trusted hosts and other sources

I had a need today to create several different hash sets of different production machines in a corporate environment. Normally, I would load a base image or gold build into EnCase or another forensic tool and hash the drive. In this case, I didn't have access to the servers yet, so I wrote some instructions and a batch file using md5deep to be given to the IT/admins who were building the machines, so they could quickly run the utility and generate hash values of all the files without me having to have access (physically or virtually). I could then take the resulting text file and import it into EnCase using an EnScript I previously wrote.
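If you can't get md5deep onto a box, the same output can be approximated with a few lines of script. Here is a minimal Python stand-in that walks a tree and emits "hash  path" lines in the style of md5deep's default output (function and file names here are my own, not from the batch file in the zip):

```python
import hashlib
import os

def hash_tree(root, out_path):
    """Walk 'root' and write one 'md5hash  full_path' line per file,
    similar to md5deep -r output. Files are read in chunks so large
    files don't blow up memory."""
    with open(out_path, "w", encoding="utf-8") as out:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                md5 = hashlib.md5()
                with open(full, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        md5.update(chunk)
                out.write("%s  %s\n" % (md5.hexdigest(), full))
```

The resulting text file imports the same way as md5deep output, since it is the hash and path columns that matter.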

Below is a zip file that contains three files: the md5deep executable, a batch file and a PDF explaining how to use it. The PDF and batch file were written for IT/sysadmin types who may not understand how to use the program and likely won't spend the time trying to figure it out. So I wrote a simple tutorial just to help speed up the process.

I am no expert in batch file programming, but it works for me, so please don't get your panties all in a bunch because my batch file is messy or it's not the way you would do it. If you have a better way, then edit it and post it in the comments for others.

As a general reminder (disclaimer), the above process should only be done on clean, fresh installs that have been isolated or protected from users (yes, users). Ideally, this should be done on clean installs, then again once they are patched so you capture multiple versions (hash values) of files that have changed during the patching process. Then once again after all the user applications, business apps, etc. are loaded, but before an average user gets his paws on it.

The zip file is password protected because I was sending it to sysadmins via email and it contains a batch file and executable.

Password is: "dizzle" (without quotes)

Download here

Saturday, December 11, 2010

Computer Forensic Hard Drive Imaging Process Tree with Volatile Data collection

Following up on my previous post, here is an updated decision tree that includes volatile data collection as well as a few of the suggestions I received by email/comments.

Click on the image below to view/download a large version, or click here.





As before, the focus of this decision tree is not to list every possible combination of scenarios, but to show some of the basic options that are available and remind examiners about things to think about when imaging. 

Feel free to add comments and suggestions below.

Thursday, December 9, 2010

Computer Forensic Hard Drive Imaging Process Tree for Basic Training

I recently had a need for a simple decision tree for students to grasp and understand some of the options available to them when imaging a hard drive. I put together a simple decision tree and figured others may find it useful. Feel free to make additions or suggestions in the comments.



Wednesday, December 1, 2010

Windows 7 Recycle Bin EnScript

I recently received an email from a friend who I had worked closely with years ago and who I have always considered to be a mentor. Every day we worked together he would challenge me and make me think about various forensic procedures and come up with innovative solutions. His name is Bruce Pixley and I miss working with him.

Bruce recently had a need to parse out some deleted files that were in the recycle bin of a Windows 7 image, but the corresponding $R files were gone. He restored several of the shadow volume instances and found several of the $I files, but the $R files were not present. He needed a way to parse just the $I index files and build a report.

Bruce ended up writing a simple EnScript to parse selected $I files in the recycle bin of a Vista/7 image. He sent me the EnScript to post as a learning exercise for others.

/*
Windows 7 Recycle Bin Report (Version: 1.0)
Select $I files found in the Windows 7 $Recycle.Bin folder that you want decoded
Enscript will create a tab-delimited file in the case export folder
Created by: Bruce W. Pixley, CISSP, EnCE
Date: 12/1/2010
*/


You can read the comments inside the EnScript for specific details of how he is parsing the data.

You can download a copy of the EnScript here


Simple example EnScript for learning purposes.

The official Guidance EnScript course uses "Progressive study" examples to show how to build an EnScript that does a specific action. Rather than just showing you a finished EnScript and the code, the idea is to start with the simple "skeleton" or "shell", then build on that piece by piece until it does what you want. In this post, I will follow that same idea and explain an EnScript request I received and then progressively write an EnScript to fit the request.

If you have read any of the previous tutorials I have posted, then you already know the basic principles and syntax, so I will skip those formalities. If you have not read them, then I suggest you click on the tutorial links at the top of the page to learn the basic syntax.

The EnScript request I received asked for the following:
Create an EnScript that exports selected files to an export folder with a sequential numeric prefix. The EnScript should take all selected files, regardless of where they are in the original image, and put them all in one single folder. The sequential numeric prefix is simply to prevent two files with the same name from overwriting each other. Lastly, create a CSV log that records the original path, MAC dates, extension, logical size and whether the file is deleted.

Here is the basic skeleton:


class MainClass {
  void Main(CaseClass c) {
  }
}



We obviously need to recurse or process through each entry in the case, so we will use a simple recursion function:

class MainClass {
  void Main(CaseClass c) {
    forall(EntryClass entry in c.EntryRoot()){
    }
  }
}

Next, we will need to check to see which objects the user has selected (blue checked):

class MainClass {
  void Main(CaseClass c) {
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
  }
}

Now we have a basic skeleton to start processing each file that the user has selected. Next, we will need to do some file I/O, so that means we will need to deal with FileClass objects. We need to create at least three different variables of the FileClass type: one for the entry object we need to open and read, a second to create a file on the local file system to write the file the user wants exported, and a third for another local file that will contain our log.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
    
  }
}

Now we need to create a folder inside the default case folder to put the exported files into. This requires another variable, of the ConnectionClass type.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
    
  }
}

This allows us to create a folder inside the default export folder that we will use to put the exported files into. Next we will open the files that the user has selected for reading and then export the file and contents to the local file system, into the folder we just created:

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       local.Open(c.ExportFolder() + "\\Exported Files\\" + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}

Now we have added three lines that first open the file that the user selected for reading, then open a file on the local file system in the export folder, and then write the contents of the selected file into the file we created on the local file system. The only problem with this approach arises when two files exist with the same name but in different paths in the original image. When they are exported into the same export folder, they will overwrite each other. Therefore, we need to prepend a numeric counter as a prefix to each file that is exported.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(),    
          FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}

Take note that the "local.Open" statement has now been lengthened to where it wraps to a new line in this blog posting. It would normally all be on the same line, terminated with a semicolon, but this blog automatically wraps long lines. 

Now we have an EnScript that exports selected files to our default export folder and prepends a numeric prefix to each file. We are almost done. We just need to create a log with the associated metadata. To do this, we need to create another file in the local export folder.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    log.Open(c.ExportFolder() + "\\Exported Files\\log.csv", FileClass::WRITE);
    log.WriteLine("Full_Path,Export_Name,Extension,Created_Date,Last_Written,Last_Accessed," + "Logical_Size,Deleted");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}

Again, take note that the local.Open and log.WriteLine statements above have wrapped in this blog entry. They will work if you copy and paste them in their wrapped form, but it does not make for very readable code.

Finally, we just need to write the metadata for each file we export into the log file:

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    log.Open(c.ExportFolder() + "\\Exported Files\\log.csv", FileClass::WRITE);
    log.WriteLine("Full_Path,Export_Name,Extension,Created_Date,Last_Written,Last_Accessed,Logical_Size,Deleted");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
       log.WriteLine(entry.FullPath() + "," + mastercounter + " - " + entry.Name() + "," + entry.Extension() + 
        "," + entry.Created().GetString() + "," + entry.Written().GetString() + "," + 
        entry.Accessed().GetString() + "," + entry.LogicalSize() + "," + entry.IsDeleted() + ",");
      }
    }
    
  }
}

Notice that I have also recorded the new name of each file in the export folder, including its sequential counter. That way, if two files were named the same but were in different original paths, the reviewer can correlate exactly which file is which, and which metadata in the log belongs to which file in the export folder.

A completed and functioning version of this EnScript can be downloaded from here. This version has some added error checking that is not discussed above, but it is very easy to understand.

Sunday, November 14, 2010

EnCase filter that uses MSSQL for faster filtering of files by hash values

Oliver Höpli from Switzerland recently emailed me a filter he wrote that may be very useful to some EnCase users. With his permission, I am posting the filter and the description he provided.


"The script is similar to the "Unique Files by Hash" filter provided by Guidance.
Because the script uses an MSSQL server for storing the hashes and not a NameListClass, it is much faster. In tests it filtered about 220,000 entries in 3 minutes. Also, the displayed filter applying time is really close to the total time that the filter actually runs.


To use this script, you must have a running instance of MSSQL Server, either locally or on your network.
Please use credentials with enough permissions to create and modify databases and tables.

The filter creates a table per dongle ID. So you could use this filter simultaneously on different EnCase installations in your lab. Please do not run the filter simultaneously on 2 or more EnCase instances on the same examiner machine.

The express edition of MSSQL Server 2008 R2 (freely available) can be downloaded from:
http://www.microsoft.com/germany/express/products/database.aspx"


Download here
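Oliver's filter keeps the already-seen hashes in MSSQL, but the core "have I seen this hash before" check is easy to prototype with a database you already have. Here is a sketch using Python's built-in sqlite3 (the table name and schema are my own for illustration, not taken from his filter, which you should download above for the real EnScript/MSSQL implementation):

```python
import sqlite3

class HashFilter:
    """Keep only the first entry seen for each hash value, letting the
    database's PRIMARY KEY constraint do the de-duplication work."""
    def __init__(self):
        self.db = sqlite3.connect(":memory:")
        self.db.execute("CREATE TABLE seen (hash TEXT PRIMARY KEY)")

    def is_first(self, hash_value):
        try:
            self.db.execute("INSERT INTO seen VALUES (?)", (hash_value,))
            return True          # first occurrence of this hash: keep it
        except sqlite3.IntegrityError:
            return False         # duplicate hash: filter it out
```

A real server-backed table scales past what an in-memory NameListClass (or sqlite) can comfortably hold, which is where the speedup he describes comes from.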


Sunday, October 3, 2010

Forensic analysis of "Frozen" hard drive using Deep Freeze

First, I apologize for the delay in updating the blog. It has been two months since my last post, it seems like it has gone by so fast. Since my last post I have essentially circumnavigated the world working on various projects from Bangkok to the US to South America to Malaysia and back to Bangkok.

One of the recent topics that came up that I felt was worth sharing was a forensic analysis of a computer using Deep Freeze. Deep Freeze is a tool produced by Faronics that is used by many organizations to maintain an installation of Windows at a defined state. It is also commonly used by Internet cafes and other public Internet locations to help protect privacy. If you are unfamiliar with it, it basically takes a "snapshot" of the hard drive(s) and then lets the user install, create, change or modify the system at will, but when the system is rebooted, it goes back to its original "state".



Over the past few years I have been approached by several organizations asking about Deep Freeze and how to do forensics on a machine that has it installed. I have also spoken to several examiners who have said "well, Deep Freeze is installed so there is no use doing a forensic exam".........FAIL... epic fail.

This review is by no means a comprehensive analysis. It is a summary of my findings and should serve as enough information to get a person started when thinking about examining a computer that has Deep Freeze installed. Deep Freeze uses a kernel-level driver, as described here, to redirect the data being written while the drive is being protected to an area that the Deep Freeze program controls. When the computer reboots, any files or data created in the previous sessions are gone.... or are they?

Essentially, this program takes a large chunk of unallocated space and uses it to store the data that is created or changed during the session while the drive is "frozen". When the computer is rebooted, all the file system records (MFT for NTFS or allocation table for FAT) that were previously created "disappear".

In reality, what this means is that the data is still out there in unallocated. Essentially, the data is in the same state as on a newly formatted drive. The file system tracking structure (MFT or FAT) no longer has any knowledge of the data, but it's still sitting there in unallocated. Better yet, with NTFS, the MFT record is also out there in unallocated, with all the necessary information to reconstruct any file, even heavily fragmented files.

Even better, Deep Freeze tends to use very high cluster numbers in unallocated. This means that data written during the frozen state ends up near the end of the partition. Recently, when examining a drive using Deep Freeze, a quick search for MFT records in unallocated revealed 40,000+ hits in high cluster numbers.

MFT Record of GIF file that was created several "sessions" earlier.


Data run of MFT record


GIF data is still intact on disk.

Old Internet history can easily be found by searching for Internet history with the "comprehensive search" option checked in EnCase. This will cause EnCase to search unallocated for Internet history records.

Data that is created during the "frozen" state obviously becomes fragile after a reboot, since it is now stored in unallocated like any other type of deleted data. Quick seizure and review is key to recovering as much information as possible after the system has been rebooted, but like all deleted data, it depends on the amount of usage after the reboot.

For a really interesting perspective, view a live machine that is running Deep Freeze using a forensic tool that can view the logical device and also the physical device and compare some sectors in unallocated. 

One of the easiest "recovery" solutions that comes to mind when dealing with a drive utilizing Deep Freeze would be to use an EnScript (you didn't think you were going to get through this post without me using that word, did you??) to parse out all the MFT records in unallocated. Then, get the data runs from each MFT record and piece the files back together. You could even correlate each cluster as you are rebuilding to see if it is currently allocated, to indicate whether part of the file has been overwritten.
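If you wanted to prototype that idea outside of EnCase, the two core pieces look something like this in Python. This is only a sketch: real recovery has to walk the record's attribute list to find the $DATA attribute before decoding its run list, which I'm skipping here. The data-run encoding itself (length/offset field sizes packed in a header nibble, offsets as signed deltas) is standard NTFS:

```python
import struct

def decode_data_runs(buf):
    """Decode an NTFS data-run list into (cluster_count, start_cluster)
    pairs. Each run starts with a header byte: low nibble = size of the
    length field, high nibble = size of the offset field. Offsets are
    signed deltas from the previous run's start cluster."""
    runs, pos, last_lcn = [], 0, 0
    while pos < len(buf) and buf[pos] != 0:      # 0x00 terminates the list
        header = buf[pos]
        len_size, off_size = header & 0x0F, header >> 4
        pos += 1
        length = int.from_bytes(buf[pos:pos + len_size], "little")
        pos += len_size
        delta = int.from_bytes(buf[pos:pos + off_size], "little", signed=True)
        pos += off_size
        last_lcn += delta
        runs.append((length, last_lcn))
    return runs

def find_mft_records(data):
    """Yield offsets of candidate MFT records: 'FILE' signature on a
    1024-byte boundary (the typical MFT record size)."""
    for off in range(0, len(data) - 4, 1024):
        if data[off:off + 4] == b"FILE":
            yield off
```

With the runs in hand, rebuilding a file is just reading each run's clusters from the physical device in order, and you could check each cluster against the current allocation bitmap as you go, as described above.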

Good luck.

Saturday, July 31, 2010

EnScript Programming Course in Melbourne, Australia

I just finished a second week of EnScript training in Melbourne, Australia with an Australian training partner named Invest-e-gate (website down for remodel at the moment). The founder of the company and I used to work together at Guidance a lifetime ago, but it was great to see him again and to find that he is staying on the cutting edge of things, just like usual.

It was a great group of students, very committed and interested in taking the use of EnCase to the next level through automation and getting some results and configurability that you can't get through the canned version of EnCase. It was amazing to spend two weeks in Australia in two separate cities but to have back-to-back classes with such committed and smart students. All of the students were able to get through the formal lessons quickly, so we spent a lot of extra time developing personal projects and ideas.

Many students had several great ideas on how to use the EnScript features, including sending lots of data from inside EnCase to a database and collecting the data from several different examiners. Some of the other ideas put to use by the students were using EnScript to help categorize bookmarks (images) for quick triage.

If you rely on hashing a lot, an idea for thought is using the power of EnScript to *quickly* hash a small portion of each file (say 20 bytes) and then create a subset of "possible" matches. Then you can invest the time and effort in hashing the entire file programmatically. This can dramatically reduce the amount of time you spend doing hash comparisons, since reading and hashing entire files is the time-consuming part. Generating a hash of a small area of the file is way quicker and lets you reduce the pool of possible entries that may match your full-sized hash values, so you spend a lot less time generating hashes of all the files.
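The two-pass idea above is easy to sketch. Here it is in Python (illustrative only; the partial length, function names and the idea of pre-computing partial hashes for your known set are my own choices, and I use 512 bytes rather than the 20 suggested above):

```python
import hashlib

PARTIAL_LEN = 512   # bytes hashed in the cheap first pass

def partial_md5(path, length=PARTIAL_LEN):
    """Hash only the first 'length' bytes: fast, and enough to rule
    most files out."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read(length)).hexdigest()

def full_md5(path):
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest()

def match_files(paths, known_partial, known_full):
    # Pass 1: the cheap partial hash narrows the candidate pool.
    candidates = [p for p in paths if partial_md5(p) in known_partial]
    # Pass 2: full hashes are computed only for the survivors.
    return [p for p in candidates if full_md5(p) in known_full]
```

Note this requires storing a partial hash alongside each full hash in your known set, since a partial hash cannot be derived from a full one.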

If you do hash analysis a lot, you can also reduce your hash comparison time if you start collecting hash values and file sizes of each file you want to identify (NIST hash sets include file size). You can then use an EnScript to first look for files that are the same size as the ones in your list and only hash those, since if the file size is different, the hash has to be different, and therefore there is no need to invest the time or computing power to hash it.
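The size pre-filter is even simpler to sketch than the partial-hash trick, since a size lookup costs almost nothing. A rough Python illustration (the dict-of-sets shape of the known set is my own choice):

```python
import hashlib
import os

def filter_by_size_then_hash(paths, known_sizes_to_hashes):
    """known_sizes_to_hashes: {logical_size: {md5_hex, ...}} built from
    a hash set that records file sizes (the NIST sets do). Files whose
    size never appears in the known set are skipped without hashing."""
    matches = []
    for p in paths:
        hashes_at_size = known_sizes_to_hashes.get(os.path.getsize(p))
        if not hashes_at_size:
            continue                      # size mismatch: no hash needed
        with open(p, "rb") as f:
            if hashlib.md5(f.read()).hexdigest() in hashes_at_size:
                matches.append(p)
    return matches
```

On a typical drive the vast majority of files fall out at the size check, so the expensive hashing only runs on a small remainder.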

We also spent some time discussing entropy and how to use that programmatically to identify files with different hash values, but with similar content, as well as using it to help find malware. Great stuff!

Here are a few cheesy shots from class in Melbourne.





Anyway, I am now back in Bangkok and I am planning on doing another EnScript course at the beginning of the year, hopefully in the Netherlands. If anyone else is interested in hosting a class, please let me know and we will see if we can pull off a course in your area. Meanwhile, back to work and hopefully some new upcoming blog posts & practicals!

If you are in and around Australia and need forensic training, I highly recommend invest-e-gate. State of the art training facilities and like I mentioned, they are doing some cutting-edge stuff in all areas of security & forensics.






Invest-e-gate Pty Ltd
+61 3 9016 4451
www.invest-e-gate.com

Friday, July 23, 2010

EnScript Programming Course in Sydney

It has been several weeks since my last post and I have been fairly busy, but I thought I would post a quick update.

I just finished an EnScript Programming course in Sydney, Australia. I have to say, the students who attended the course were very sharp. All of them immediately began to come up with ideas and ways to use EnScripts in their workloads.

A couple of ideas that came from the students were using an EnScript to parse through all the archive files, extract all the user-defined file types, such as JPGs, GIFs & PNGs, from inside the archives and then create a new LEF with just those files. The thinking was that sometimes the image you are examining has a lot of archive files, and mounting them all at once is a memory/resource issue. By putting the extracted files in a LEF, EnCase does not need to virtually reconstruct the archives in memory, so it's less of a resource problem.

Another idea was using an EnScript to access the Document view in EnCase and extract embedded graphics in office docs and other document types and then be able to export or collect those images separately to be able to quickly see the images that are embedded within files, without having to read the docs.

Brian Jones has come up with several EnScripts that have been posted to the Guidance support portal, you should check them out.

It was a great class and great students, very inspiring to see people coming up with new ideas to leverage the power of EnScripts. I am now off to Melbourne to teach another EnScript class there.

Here are a few pictures from class:








Friday, June 4, 2010

Forensic Practical Exercise #4

I have previously posted a couple different practical exercises here for people to work through and practice. You can see the previous ones here: Practical #1, Practical #2, Practical #3.

This exercise is going to be a little more theoretical, because I cannot share the data that I have and I have no ability to make additional data for sharing.

So here is the scenario (BTW, it's a real scenario). Local police detectives have responded to the scene of a homicide. During their investigation, they discovered that there is a CCTV system that may have caught the entire event on video. Being conscious of preserving the data, they called the security company responsible for installing the CCTV system, who promptly responded and shut down the CCTV system. The technician pulled the hard drive out and gave it to the detectives, who have now given it to you with one simple request: "find the evidence". They want you to extract the videos so they can review them to see if they are useful in helping solve the case. Sounds simple, eh?

Being the energetic examiner that you are, you quickly image the hard drive and begin an initial analysis. Once imaged, you load the image into EnCase and see a single 100GB FAT32 volume containing hundreds of files in the root directory of the volume. There are no subdirectories (other than some file system generated directories that contain no data). Information about the volume looks like this:


The files in the root directory look like this:


The video data from each day is recorded and stored in one or multiple files depending on the amount of data recorded. Each file has the extension of "XBA". The file header looks like this:


You then export several files out to your local working drive and attempt to view them using a freely available video viewer. Each attempt to view fails, and the viewer reports the file is corrupted. A quick look at the exported files shows they are each 32,768 bytes in length, even though EnCase reports a different size for each file you exported.

Ideas?..........Let the questions begin... please use the comment function below so everyone can benefit from questions and answers already given.

Wednesday, May 26, 2010

EnScript to parse TIFF Metadata

An investigator contacted me this week about an investigation involving several hundred TIFF files that had been generated from a fax machine. The investigator had a need to quickly extract all the metadata out of the TIFF files. A couple different external programs could be used to do this, for example, ExifTool by Phil Harvey.

My goal was to create a quick EnScript to parse the TIFFs and provide the data without having to export the files out of EnCase. This caused me to take a closer look at TIFF format and the associated metadata that is stored inside. TIFF files are pretty common, especially in an environment where scanned documents can be found. This would include the processing stage of some e-Discovery jobs.

The TIFF file format is well documented. It can be found here. There are several fields inside a TIFF file that may be valuable, specifically in a TIFF that was generated as part of a fax transmission process. A TIFF file that is generated from a fax is commonly referred to as a TIFF-F or a bilevel TIFF. There is an excellent discussion of the TIFF-F format in RFC 2306.

There are several tags (fields) that are commonly associated with a TIFF fax file that may be useful. The ones I have identified and included so far are:

Image File Width
Image File Length
Compression (identifies it clearly as a fax)

Image Description
Page Numbers
Software
Date/Time

There are additional standard TIFF tags, such as data offsets, resolution, resolution unit, etc., but they don't have much value from an investigative standpoint. In addition to the standard TIFF tags, there can also be non-standard custom tags that are added by additional software, such as the Microsoft Document Imager (MDI) that is commonly used when a Windows OS computer is used as a fax.
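Reading the standard tags out of a TIFF is straightforward once you know the layout: a byte-order mark ("II" or "MM"), the magic number 42, an offset to the first image file directory (IFD), and then 12-byte IFD entries of tag/type/count/value. A rough Python sketch (not the EnScript; for simplicity it reports each entry's raw 4-byte value field, which holds the value inline for small types and an offset to the data for larger ones like the description and date strings):

```python
import struct

TAG_NAMES = {
    256: "ImageWidth", 257: "ImageLength", 259: "Compression",
    270: "ImageDescription", 297: "PageNumber",
    305: "Software", 306: "DateTime",
}

def read_ifd_tags(data):
    """Return {tag_id: raw_value_field} for the first IFD of a TIFF."""
    order = {b"II": "<", b"MM": ">"}[data[0:2]]          # byte order mark
    magic, ifd_off = struct.unpack(order + "HI", data[2:8])
    assert magic == 42, "not a TIFF"
    count = struct.unpack(order + "H", data[ifd_off:ifd_off + 2])[0]
    tags = {}
    for i in range(count):
        entry = data[ifd_off + 2 + 12 * i: ifd_off + 14 + 12 * i]
        tag, typ, n, value = struct.unpack(order + "HHII", entry)
        tags[tag] = value
    return tags
```

A Compression value of 3 or 4 (CCITT Group 3/4) is the giveaway that the file came through a fax path.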

When the MDI tags are present, there can be additional information that is useful to the investigator. For example:

Title
Author (Windows user account)
Last Saved by (Windows user account)
Last Edit Timestamp
Last Print Time Timestamp
Create Date Timestamp
Last Saved Timestamp
Page count
Word count
Char count
....and several others...

These tags are basically the same ones that you typically find in a Microsoft Office OLE file (doc, xls, ppt).

The EnScript below will parse out all the standard TIF tags mentioned above. In addition, if there is OLE information, it will currently parse out the document name, author & last saved by name. I am still working on parsing some of the other MDI tags, but I don't have many sample TIF files that have this MDI information. If you have access to any TIF files that contain MDI information and are willing to share them, please contact me at lance(at)forensickb.com.

Meanwhile, you can run the EnScript below against any selected TIF files in EnCase and it will bookmark the tag fields mentioned above, as well as print out the metadata information to the console tab. 


If there is MDI information, those fields that are currently being parsed will appear in the bookmarks as well as the message "There is MDI information present" in the console:



Please contact me if you have any TIF files that contain MDI information so I can continue to develop the EnScript to parse the additional pertinent fields.


Wednesday, May 19, 2010

EnScript to find and parse "vk" registry keys

Earlier today I posted an EnScript that parses the 'nk' registry records from any selected files in EnCase. You can read about that EnScript in the original post here.

This EnScript essentially does the same basic function, except it searches for 'vk' records, which are the records that hold data values. The registry hive holds different types of data in different records. A 'vk' record can have the data value "resident" inside the 'vk' record itself, or it can be "non-resident" and have its own record elsewhere in the registry hive.

Therefore, when searching for 'vk' records, it is common to find a record that has no data value name and/or no data value inside the record itself. Using the same example from the previous post about 'nk' records:


The "MountedDevices" folder (key) is an 'nk' record that we covered in the previous post. The data values are on the right side of the window. The data value names are things like "\DosDevices\C:", "\DosDevices\D:", etc. The data value itself is the value that's stored inside that specific data name entry. For instance, the data name "\DosDevices\C:" would commonly have a value similar to what you see here:



The value data inside the data value name is the hex values you see above.

This EnScript attempts to find 'vk' records and then parses them as best as possible. As mentioned, it is common for the actual data value to be stored elsewhere, in which case it cannot be parsed from the record itself. If the data value is 4 bytes or smaller, it is stored within the 'vk' record along with the value name. Here is an example of the output of the EnScript:
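For illustration, the resident/non-resident logic can be sketched in Python using the documented 'vk' cell layout: the high bit of the data-length field is what flags a value as resident, in which case the data lives in the 4-byte field that would otherwise hold an offset. This is a simplified sketch, not the EnScript:

```python
import struct

REG_TYPES = {1: "REG_SZ", 3: "REG_BINARY", 4: "REG_DWORD"}  # partial map

def parse_vk(buf, off=0):
    """Decode a 'vk' record starting at buf[off] (just past the cell size field).

    Returns (value_name, type_name, resident_data); resident_data is None
    when the value is stored elsewhere in the hive (non-resident).
    """
    if buf[off:off + 2] != b"vk":
        raise ValueError("no 'vk' signature at offset")
    name_len, data_len = struct.unpack_from("<HI", buf, off + 2)
    data_field, data_type, flags = struct.unpack_from("<IIH", buf, off + 8)
    resident = bool(data_len & 0x80000000)   # high bit set = data stored inline
    size = data_len & 0x7FFFFFFF
    if name_len == 0:
        name = "(default)"                   # unnamed value, as shown in regedit
    else:
        name = buf[off + 20:off + 20 + name_len].decode("latin-1")
    data = None
    if resident:                             # value lives in the offset field itself
        data = struct.pack("<I", data_field)[:size]
    return name, REG_TYPES.get(data_type, hex(data_type)), data
```

A non-resident record decoded this way returns a name but no data, which matches the bookmarks described below that show a value name with no value.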



In the screenshot above, I searched the pagefile. The value names can be seen in the comment field on the right. After the value name, the data value itself is displayed if it was resident to that 'vk' record. You can see several bookmarks that have a value name but no value itself. This is because the value was not resident to that record and is stored elsewhere. Some value names are blank, and therefore you will see the name "default" (as you would typically see in regedit or another registry viewer).

This EnScript only bookmarks the data.

Download Here

EnScript to find and parse "nk" registry keys

There have been a lot of postings lately in the forensic community about the value and information that can be gleaned from an orphaned 'nk' registry record that may exist in unallocated space. The 'nk' record holds the name of a registry key, i.e. the name of the folder when viewed in regedit or other registry viewing tools. It does not contain the data value itself.

Here is an example:


The "MountedDevices" item on the left side is the key (aka 'nk' record). The data values inside that key (folder) are 'vk' records (blue highlighted items on the right) and are not parsed by this EnScript. 

This EnScript can be used to search any selected (blue-checked) file in EnCase. Commonly, that would be unallocated clusters, the pagefile, and active registry hives to find deleted keys. When you run the EnScript, you will be presented with date range fields. The 'nk' records (keys) are what hold the last modified timestamp, so if you are looking for activity during a specific date range, you can narrow or broaden the hits that are found by entering whatever dates/times you want in these fields.


Once you enter dates & times, press "OK" and all the selected files will be searched for 'nk' records. Once found, the EnScript will try to validate each hit as a valid key, then bookmark it. It will also indicate if it is a deleted key as opposed to a valid "in use" key. It is common to find "in use" keys in unallocated space and the pagefile as they are moved around and swapped out of memory, but that does not mean they are still in use.
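The carve-and-validate idea can be sketched outside of EnCase (in Python here) using the documented 'nk' cell layout: the signature at offset 0, the FILETIME last-written timestamp at offset 4, and the key name length and name at offsets 0x48 and 0x4C. A date-range check like the EnScript's then filters the hits; this is an illustrative sketch with much lighter validation than the EnScript performs:

```python
import struct
from datetime import datetime, timedelta

WINDOWS_EPOCH = datetime(1601, 1, 1)        # FILETIME origin

def filetime_to_datetime(ft):
    """Convert a 64-bit FILETIME (100ns ticks since 1601) to a datetime."""
    return WINDOWS_EPOCH + timedelta(microseconds=ft // 10)

def carve_nk_records(buf, start, end):
    """Scan buf for 'nk' cells whose last-written time falls in [start, end].

    Yields (offset, timestamp, key_name). The timestamp sanity check here
    stands in for the fuller validation the EnScript performs.
    """
    pos = buf.find(b"nk")
    while pos != -1:
        try:
            ft = struct.unpack_from("<Q", buf, pos + 4)[0]
            ts = filetime_to_datetime(ft)
            if start <= ts <= end:
                name_len = struct.unpack_from("<H", buf, pos + 0x48)[0]
                name = buf[pos + 0x4C:pos + 0x4C + name_len].decode("latin-1")
                yield pos, ts, name
        except (struct.error, OverflowError):
            pass                            # ran off the buffer or absurd FILETIME
        pos = buf.find(b"nk", pos + 2)
```

Hits with timestamps outside the range are simply skipped, just as the EnScript reports "Timestamp out of range, skipping...." for out-of-range records.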

The EnScript will create a bookmark folder with the timestamp of the current time that you ran the EnScript, along with the range you chose:


The comment field will have the date in a numerical UNIX format so you can accurately sort this column by date. After the date comes the name of the key that was found, and then whether it is deleted.

The EnScript will also print some basic info to the console tab:

Case 1\C\Unallocated Clusters
-------------------------------------------------------------------------------
07/14/09 09:12:04AM    win32
07/14/09 09:12:04AM    FLAGS
07/14/09 09:12:04AM    HELPDIR
07/14/09 09:11:24AM    notepad.exe
07/14/09 09:11:24AM    command
09/12/09 06:24:08PM    OpenWithList
07/14/09 09:11:24AM    notepad.exe
Timestamp out of range, skipping....
07/14/09 09:11:24AM    .log
07/14/09 09:11:24AM    .scp
07/14/09 09:11:24AM    ShellNew
07/14/09 09:11:24AM    inifile
07/14/09 09:11:24AM    DefaultIcon
07/14/09 09:11:24AM    shell
07/14/09 09:11:24AM    open

Hits that are outside the range you specified will not be bookmarked and you will see "Timestamp out of range, skipping...." in the console when a record is skipped.



SafeBoot Info EnScript

An old colleague of mine, Brian Olson, contacted me and offered to share an EnScript that he wrote. The EnScript was designed to help those of you who may have SafeBoot encryption deployed in your organization.

Here is a description of the EnScript directly from Brian:
Although the SafeBoot Management Console generally associates the key with the asset, we have encountered several situations where we could not easily locate the correct key to decrypt a SafeBoot encrypted drive. In some cases we found computers where the hard drive was swapped between assets by our internal helpdesk technicians, multiple decryption keys existed for the same asset, or, even worse, keys were renamed.

I (Brian) wrote this EnScript to assist an Examiner with identifying the correct SafeBoot .SDB Database File (Decryption Key) based on metadata stored by SafeBoot on the hard drive. This EnScript will provide the Examiner with a brief report containing enough information to locate the correct SafeBoot Database and Object information by searching for the Machine ID. From there, the .SDB key can be exported and used to decrypt the volume from within EnCase or using SafeBoot Vendor Tools.

Example Report:
SafeBoot Information
Physical Device: 0
SafeBoot Signature found in Device '0' Sector 1.
SafeBoot Encryption Information
SafeBoot Alg: 00000012
Database ID: 1234ABCD
Machine ID: 000012AB
SBFS Sector Map: 1668231
SBFS Sector Map Count: 23
SBFS KeyCheck: 123456ABCDEF
Region 1 Information
Region 1 - Start Sector: 63
Region 1 - End Sector: 156296385
Region 1 - Sector Count: 156296322
PowerFail Status
Status: Inactive

This EnScript is still in Beta, but has been mostly reliable in our environment. I (Brian) would appreciate any feedback from any other SafeBoot users regarding the accuracy of this EnScript in their environments.

Some Known Issues include:

  • Currently identifies only one “Region” (Encrypted Volume). Multiple Region Support is a planned feature.
  • Power Failure State still needs further testing and improvements. May still report Inactive...
  • ‘End Sector’ Region may be “0” on McAfee Enterprise Encrypted Disks.
Bugs or Comments can be reported to Brian at: dbrianolson (at) gmail.com

Download Here

Friday, May 7, 2010

Guidance Software releases "WinAcq", a command line acquisition tool in EnCase v6.16

For those of you who read the "New Features" section in the help file, this may be old news, but the latest release of EnCase now has a command line acquisition tool called "WinAcq".


This tool is designed to run from the command line in the Windows operating system to acquire whatever physical or logical drive you specify. The utility can be run interactively, where it prompts for certain information before it executes the acquisition, or it can be run with all the options specified on the command line. Additionally, you can create a config file that contains all the settings you want the utility to use. The latter two methods allow for batch or scripted operation, i.e. from a flash drive or bootable CD.


Luiz Rabelo posted a very good article on his blog about all the command line options and even did some videos. His page is in Portuguese but you can view it in English here with the help of Google.

Sunday, March 21, 2010

EnCase Portable device - Review

This blog post is a review of the EnCase portable device. I have had the chance to use the EnCase portable device for several months now, starting with the initial version that was released, but I finally got a chance to sit down and write a review. The current EnCase Portable version that is publicly available is v1.2.1, which was released November 2009.

The EnCase Portable kit consists of a small carrying case, a HASP security key, the EnCase Portable USB device (black), a 16GB flash device that is used for the collected data (blue), and an IOGear USB hub:


The EnCase Portable device was released about a year ago and is designed to be deployed on a subject's computer to collect a predetermined set or type of files. The device works in one of two ways:

1. You can use the Source Processor EnScript to choose a predefined collection job (i.e. collect all documents) and then load the USB device with this job. When the USB device is run on the target computer, the job is executed and the predetermined types of files are collected.

2. The second method is to insert the USB device on the target computer and choose the pre-determined job at the time of triage.

Method one would be for giving the device to someone who does not know much about EnCase or does not need to interact with the collection process whatsoever. Method two would be for someone with average knowledge of EnCase who could decide what types of files need to be collected at the time of collection in the field.

In addition to the two collection methods described above, the USB device can be used in one of three ways to perform the collection:

1. For computers that support booting from a USB device, you can insert the black USB EnCase portable device and boot directly to an operating system installed on the USB (BartPE-ish).

2. For computers that don't support booting to USB devices (older computers or BIOS is locked down), then you can boot from an included CD-ROM that contains a stand alone operating system and the necessary EnCase program.

3. You can insert the USB on a running device and execute the EnCase portable process directly from the USB while the computer is running.

The EnCase security key must also be connected to the target machine during the time of the collection. There are also three choices for storing the collected data:

1. You can store the collected data on the actual EnCase portable device itself. It is a 4GB flash device, so space is somewhat limited if your collection may contain a large number of files or large amounts of data.


2. You can use the included Kingston 16GB flash device (these devices are horribly slow).



3. You can use your own external USB device such as an external USB hard drive.

If you choose option #3, there is a VB script included on the EnCase Portable device that is intended to prepare your own external USB storage device. There is nothing special about the preparation process, other than a certain folder path must be present. If it is not present, the EnCase Portable program will ignore the external storage device and attempt to store the collected data on the EnCase Portable device itself. There is no limitation on the file system of the external storage device; it can be anything Windows can read & write to.

I highly recommend using option #3. If you have this device and are going to be doing collections, get yourself a high-quality, large external USB storage device that is USB bus powered, i.e. a 7200rpm tri-interface (USB/1394a/1394b) hard drive. (The EnCase Portable kit does come with a power supply for the USB hub, which is not pictured above.)

When you run the portable EnScript (EnPack), the following menu is displayed:


The lower portion of the window lists the pre-defined collection jobs. The only job not shown is "Create PII Report" which is available if you scroll down. Highlighting a job and clicking "Run Job" starts the collection of those types of files.

Unfortunately, there is no way to see what "Collect Documents Files" entails from here. You either have to know what kinds of files that collection job includes from some type of external documentation, or have run the job before to know what kinds of files it will collect. The same is true for all these job types. There is no way to see what "Picture files" entails. I can assure you, all the jobs are comprehensive in the types of files they collect, but there is no way to focus only on certain types of files, such as only .JPG or only .DOCX extensions. These jobs are statically defined and cannot be edited or changed.

Having experience in writing EnScripts, I see many ways to write some custom EnScripts that can be used on this device to collect or filter on any type of criteria. In other words, it would be simple to create a condition type interface that would let you select the types of files to be collected based on metadata, i.e. name, path, size, dates, etc. You could also include the ability to perform keyword searches to define which files to collect, which is not available in this version. The above described functionality is supposed to be available in the next release (v2.0), but I have not yet seen it.

I will mention that the EnCase security key is somewhat limited in that it is designed to be used only with the EnCase Portable process. You cannot use this dongle to perform analysis of the collected data. Using EnCase with this security key will report "EnCase Forensic" in the window title, but it will not display the structure of loaded local devices and will report "None of the selected devices are available" if you try to load a standard evidence file. It is designed for collection only. I assume you could buy this product and get a cert file that is associated with your current EnCase dongle, but I don't really see an advantage.

If you have one of these devices, please feel free to comment below with your experience using EnCase Portable.

If you have one of these devices and want to try a custom built EnScript to collect data, please feel free to email me.
