Wednesday, December 15, 2010

EnCase EnScript to merge two hash sets (.hash) into one hash set

Okay, so you can probably tell from the last several posts that I am doing a lot of work with hash sets right now. Following up on my previous posts, I had some hash sets from various servers that were created individually, but I later wanted to merge them together. I had written an EnScript to do this a few years ago, but quite honestly I have not used it lately, and I noticed that there is a new HashMergeClass in EnCase, so I figured I would try it out.


The good part about the built-in HashMergeClass is that it's faster than doing it all manually with an EnScript, and it does the binary sorting/de-duping automagically. Anyway, here is a quick EnScript that will prompt for two *.hash files and then merge them together into one .hash file. The resulting merged file is placed into the root of your hash set folder, with the names of the two source .hash files combined into the new filename. For example, if you have two hash sets named:

"Windows XP" and
"Windows 7"

The resulting merged file will be named:

"MERGED_Windows XP_Windows 7.hash".

Download Here

Tuesday, December 14, 2010

EnCase Enterprise EnScript to add application descriptors from selected processes in snapshot data

I was recently helping a company set up and deploy EnCase Enterprise on their network. Part of the initial setup process is to create some baselines of their servers & workstations. I recently posted about creating some quick and dirty hash sets here.

In this case, I needed to create some application descriptors to use as machine profiles in EnCase. I prefer to use regular hash sets when doing analysis, because they let you identify known running processes and can also be used on static files (not running).

App descriptors are exclusively used in EnCase Enterprise/FIM. You *could* technically use them in the Forensic/LE edition when you run the scan local machine EnScript, but if you feel you need them on your local machine, then I think you have more to worry about than app descriptors. Knock yourself out, though. An app descriptor is used to identify running processes, DLLs and drivers when collecting snapshot data. If you have hash sets loaded into the library, those will also be compared and displayed if any of the processes match a known/notable hash set. The downside to using hash sets is that you cannot use the hash data in a hash set as part of a machine profile. Machine profiles define which processes are approved or not approved for a particular machine or machine profile (i.e. all the webservers), and that is what an app descriptor is used for.

EnCase Enterprise includes an EnScript to create app descriptors, but it involves mounting the remote device and, honestly, it can take a while and I am impatient. So I decided to write an EnScript that let me check each process on the processes tab under the snapshot data and then quickly add it as an app descriptor. You can do this manually by clicking on each one, one at a time, but as I mentioned, I'm impatient (as well as having ADD), so I wanted a quick way to find all the processes that matched a hash set, select them and add them as app descriptors.

The use of this EnScript is pretty straightforward. Select whatever processes you want to add under the snapshot->processes tab, then run the EnScript. The EnScript is "global": you can check processes across multiple snapshots (machines) and they will all be added.



It will then prompt you for a folder where you want to place the new app descriptors. You can add a folder by right-clicking on any object in the tree.


If you don't select a folder, the EnScript will terminate without doing anything. If you don't select at least one process on the snapshot->processes tab, you will receive an error dialog reminding you that you need more coffee and that you need to select at least one process to add as a descriptor.



Creating hash sets from gold builds, trusted hosts and other sources

I had a need today to create several different hash sets of different production machines in a corporate environment. Normally, I would load a base image or gold build into EnCase or another forensic tool and hash the drive. In this case, I didn't have access to the servers yet, so I wrote some instructions and a batch file using md5deep to give to the IT admins who were building the machines, so they could quickly run the utility and generate hash values of all the files without my needing access (physical or virtual). I could then take the resulting text file and import it into EnCase using an EnScript I previously wrote.

Below is a zip file that contains three files: the md5deep executable, a batch file and a PDF explaining how to use it. The PDF and batch file were written for IT/sysadmin types who may not understand how to use the program and likely won't spend the time trying to figure it out, so I wrote a simple tutorial just to help speed up the process.
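If you want to sanity-check the text file md5deep produces before importing it, its default output is simply one MD5 per line followed by the full path, separated by whitespace. A minimal Python sketch of that parsing (assumes md5deep's default output format; the function name is mine):

```python
def parse_md5deep(lines):
    # md5deep's default output: "<32-char md5>  <full path>" per line.
    # Split once on the first whitespace run so paths containing spaces stay intact.
    results = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        md5, path = line.split(None, 1)
        results.append((md5.lower(), path))
    return results

sample = ["D41D8CD98F00B204E9800998ECF8427E  C:\\Program Files\\readme.txt"]
print(parse_md5deep(sample))
# [('d41d8cd98f00b204e9800998ecf8427e', 'C:\\Program Files\\readme.txt')]
```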

I am no expert in batch file programming, but it works for me, so please don't get your panties all in a bunch because my batch file is messy or it's not the way you would do it. If you have a better way, then edit it and post it in the comments for others.

As a general reminder (disclaimer), the above process should only be done on clean, fresh installs that have been isolated or protected from users (yes, users). Ideally, this should be done on a clean install, then again once it is patched, so you capture multiple versions (hash values) of files that changed during the patching process, and once more after all the user applications, business apps, etc. are loaded, but before an average user gets his paws on it.

The zip file is password protected because I was sending it to sysadmins via email and it contains a batch file and executable.

Password is: "dizzle" (without quotes)

Download here

Saturday, December 11, 2010

Computer Forensic Hard Drive Imaging Process Tree with Volatile Data collection

Following up on my previous post, here is an updated decision tree that includes volatile data collection as well as a few of the suggestions I received by email/comments.

Click on the image below to view/download a large version, or click here.





As before, the focus of this decision tree is not to list every possible combination of scenarios, but to show some of the basic options that are available and remind examiners about things to think about when imaging. 

Feel free to add comments and suggestions below.

Thursday, December 9, 2010

Computer Forensic Hard Drive Imaging Process Tree for Basic Training

I recently had a need for a simple decision tree for students to grasp and understand some of the options available to them when imaging a hard drive. I put together a simple decision tree and figured others may find it useful. Feel free to make additions or suggestions in the comments.



Wednesday, December 1, 2010

Windows 7 Recycle Bin EnScript

I recently received an email from a friend who I had worked closely with years ago and who I have always considered to be a mentor. Every day we worked together he would challenge me and make me think about various forensic procedures and come up with innovative solutions. His name is Bruce Pixley and I miss working with him.

Bruce recently had a need to parse out some deleted files that were in the recycle bin of a Windows 7 image, but the corresponding $R files were gone. He restored several of the shadow volume instances and found several of the $I files, but the $R files were not present. He needed a way to parse just the $I index files and build a report.

Bruce ended up writing a simple EnScript to parse selected $I files in the recycle bin of a Vista/7 image. He sent me the EnScript to post as a learning exercise for others.

/*
Windows 7 Recycle Bin Report (Version: 1.0)
Select $I files found in the Windows 7 $Recycle.Bin folder that you want decoded
Enscript will create a tab-delimited file in the case export folder
Created by: Bruce W. Pixley, CISSP, EnCE
Date: 12/1/2010
*/


You can read the comments inside the EnScript for specific details of how he is parsing the data.
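For anyone curious what those $I files look like internally, the Vista/7 (version 1) layout is an 8-byte header, the original file's 8-byte logical size, an 8-byte FILETIME deletion timestamp, and then the original path in UTF-16LE. A rough Python sketch of parsing that record (my own illustration, not Bruce's EnScript):

```python
import struct
from datetime import datetime, timedelta

def parse_dollar_i(data):
    # Vista/7 $I record: header (8 bytes), original size (8), FILETIME (8),
    # then the original path in UTF-16LE, null-terminated.
    header, size, filetime = struct.unpack_from("<qqq", data, 0)
    # FILETIME counts 100-ns intervals since 1601-01-01 UTC
    deleted = datetime(1601, 1, 1) + timedelta(microseconds=filetime // 10)
    path = data[24:].decode("utf-16-le").split("\x00", 1)[0]
    return size, deleted, path
```

Presumably the EnScript decodes these same three fixed-size fields and the trailing path before writing each row to the tab-delimited report.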

You can download a copy of the EnScript here


Simple example EnScript for learning purposes.

The official Guidance EnScript course uses "Progressive study" examples to show how to build an EnScript that does a specific action. Rather than just showing you a finished EnScript and the code, the idea is to start with the simple "skeleton" or "shell", then build on that piece by piece until it does what you want. In this post, I will follow that same idea and explain an EnScript request I received and then progressively write an EnScript to fit the request.

If you have read any of the previous tutorials I have posted, then you already know the basic principles and syntax, so I will skip those formalities. If you have not read them, then I suggest you click on the tutorial links at the top of the page to learn the basic syntax.

The request was as follows:
Create an EnScript that exports selected files to an export folder with a sequential numeric prefix. The EnScript should take all selected files, regardless of where they are in the original image, and put them all in one folder. The sequential numeric prefix is simply to keep two files with the same name from overwriting each other. Lastly, create a CSV log that records the original path, MAC dates, extension, logical size and whether the file is deleted.
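Before building this in EnScript, the requested behavior is easy to sketch in ordinary Python (the function name and the shape of the entry records are hypothetical; this just shows the logic, not the EnScript itself):

```python
import csv
import os
import shutil

def export_selected(entries, export_dir):
    # entries: list of dicts with "path" and "deleted" keys (hypothetical shape).
    # Copies each file into export_dir with a sequential numeric prefix and
    # writes a CSV log so exported names can be tied back to original paths.
    os.makedirs(export_dir, exist_ok=True)
    with open(os.path.join(export_dir, "log.csv"), "w", newline="") as f:
        log = csv.writer(f)
        log.writerow(["Full_Path", "Export_Name", "Logical_Size", "Deleted"])
        for counter, entry in enumerate(entries, 1):
            export_name = "%d - %s" % (counter, os.path.basename(entry["path"]))
            shutil.copy(entry["path"], os.path.join(export_dir, export_name))
            log.writerow([entry["path"], export_name,
                          os.path.getsize(entry["path"]), entry["deleted"]])
```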

Here is the basic skeleton:


class MainClass {
  void Main(CaseClass c) {
  }
}



We obviously need to recurse or process through each entry in the case, so we will use a simple recursion function:

class MainClass {
  void Main(CaseClass c) {
    forall(EntryClass entry in c.EntryRoot()){
    }
  }
}

Next, we will need to check to see which objects the user has selected (blue checked):

class MainClass {
  void Main(CaseClass c) {
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
  }
}

Now we have a basic skeleton to start processing each file that the user has selected. Next, we will need to do some file I/O, which means dealing with FileClass objects. We need to create at least three different variables of the FileClass type: one for the entry object we need to open and read, a second to create a file on the local file system to write out the file the user wants exported, and a third for another local file that will contain our log.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
    
  }
}

Now we need to create a folder inside the default export folder to put the exported files into. This requires another variable of the ConnectionClass type.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
      }
    }
    
  }
}

This allows us to create a folder inside the default export folder that we will use to put the exported files into. Next we will open the files that the user has selected for reading and then export the file and contents to the local file system, into the folder we just created:

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       local.Open(c.ExportFolder() + "\\Exported Files\\" + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}

Now we have added three lines: the first opens the file that the user selected for reading, the second opens a file on the local file system in the export folder, and the third writes the contents of the selected file into the file we created on the local file system. The only problem with this approach is when two files exist with the same name but in different paths in the original image: when they are exported into the same export folder, they will overwrite each other. Therefore, we need to prepend a numeric counter as a prefix to each file that is exported.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}

Now we have an EnScript that exports selected files to our default export folder and prepends a numeric prefix to each file. We are almost done. We just need to create a log with the associated metadata. To do this, we need to create another file in the local export folder.

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    log.Open(c.ExportFolder() + "\\Exported Files\\log.csv", FileClass::WRITE);
    log.WriteLine("Full_Path,Export_Name,Extension,Created_Date,Last_Written,Last_Accessed," + "Logical_Size,Deleted");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
      }
    }
    
  }
}


Finally, we just need to write the metadata for each file we export into the log file:

class MainClass {
  void Main(CaseClass c) {
    EntryFileClass file();
    LocalFileClass local(), log();
    ConnectionClass conn = LocalMachine;
    conn.CreateFolder(c.ExportFolder() + "\\Exported Files");
    uint mastercounter;
    log.Open(c.ExportFolder() + "\\Exported Files\\log.csv", FileClass::WRITE);
    log.WriteLine("Full_Path,Export_Name,Extension,Created_Date,Last_Written,Last_Accessed,Logical_Size,Deleted");
    
    forall(EntryClass entry in c.EntryRoot()){
      if (entry.IsSelected()){
       file.Open(entry);
       mastercounter++;
       local.Open(c.ExportFolder() + "\\Exported Files\\" + mastercounter + " - " + entry.Name(), FileClass::WRITE);
       local.WriteBuffer(file);
       log.WriteLine(entry.FullPath() + "," + mastercounter + " - " + entry.Name() + "," + entry.Extension() + 
        "," + entry.Created().GetString() + "," + entry.Written().GetString() + "," + 
        entry.Accessed().GetString() + "," + entry.LogicalSize() + "," + entry.IsDeleted() + ",");
      }
    }
    
  }
}

Notice that I have also recorded the new name of each exported file in the log, including its sequential counter. That way, if two files share the same name but come from different original paths, the reviewer can correlate exactly which exported file is which and which metadata in the log belongs to which file in the export folder.

A completed and functioning version of this EnScript can be downloaded from here. This version has some added error checking that is not discussed above, but it is very easy to understand.

Computer Forensics, Malware Analysis & Digital Investigations
