Wednesday 18 December 2013

Monkey Vs Python = Template Based Data Extraction Python Script



There seem to be 2 steps to forensically reverse engineering a file format:
- Figuring out how the data is structured
- Extracting that data for subsequent presentation

The dextract.py script is supposed to help bridge those two steps. Obviously, I was battling to come up with a catchy script name ("dextract" = data extract). Meh ...

The motivation for this script came when I was trying to reverse engineer an SMS Inbox binary file format and really didn't want to write a separate data extraction script for every subsequent file format. I also wanted to have a wrestle with Python so this seemed like as good an opportunity as any.

Anyhoo, while 9 out of 10 masochists agree that reverse engineering file formats can be fun, I thought why not save some coding time and have one configurable extraction script that can handle a bunch of different file formats.
This led me to the concept of a "template definition" file. This means one script (with different templates) could extract/print data from several different file types.
Some quick Googling showed that the templating concept has already been widely used in various commercial hex editors:
http://sandersonforensics.com/forum/content.php?119-RevEnge
http://www.x-ways.net/winhex/index-m.html
http://www.sweetscape.com/010editor/
http://www.hexworkshop.com/


Nevertheless, I figured an open source template based script that extracts/prints data might still prove useful to my fellow frugal forensicators - especially if it could extract/interpret/output data to a Tab Separated Value (TSV) file for subsequent presentation.
It is hoped that dextract.py will save analysts from writing customized code and also allow them to share their template files so that others don't have to re-do the reverse engineering. It has been developed and tested (somewhat) on SIFT v2.14 running Python 2.6.4. There may still be some bugs in it so please let me know if you're lucky/unlucky enough to find some.

You can get a copy of the dextract.py script and an example dextract.def template definition file from my Google code page here.
But before we begin, Special Thanks to Mari DeGrazia (@maridegrazia) and Eric Zimmerman (@EricRZimmerman) for their feedback/encouragement during this project. When Monkey starts flinging crap ideas around, he surely tests the patience of all those unlucky enough to be in the vicinity.

So here's how it works:

Everyone LOVES a good data extraction!


Given a list of field types in a template definition file, dextract.py will extract/interpret/print the respective field values starting from a given file offset (defaults to beginning of the file).
After it has iterated through each field in the template definition file once, it assumes the data structure/template repeats until the end offset (defaults to end of file), so the script keeps extracting records until that point.
Additionally, by declaring selected timestamp fields in the template definition, the script will interpret the hex values and print them in a human readable ISO format (YYYY-MM-DDThh:mm:ss).
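For example, interpreting a big endian "unix32" field boils down to something like this minimal Python sketch (illustrative only, not dextract.py's actual internals):

import struct, datetime

raw = b'\x52\xac\xff\x15'           # 4 bytes as they appear in the file
secs = struct.unpack('>I', raw)[0]  # '>I' = big endian unsigned 32 bit int
print(datetime.datetime.utcfromtimestamp(secs).isoformat())
# prints 2013-12-15T01:00:05 (the first record's timestamp in the example below)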

To make things clearer, here's a fictional SMS Inbox file example ... Apparently, Muppets love drunk SMS-ing their ex-partners. Who knew?
So here's the raw data file ("test-sms-inbox.bin") as seen by WinHex:

"test-sms-inbox.bin"


OK, now say that we have determined that the SMS Inbox file is comprised of distinct data records with each record looking like:

Suspected "test-sms-inbox.bin" record structure

Observant monkeys will notice the field marked with the red X. For the purposes of this demo, the X indicates that we suspect that field is the "message read" flag but we're not 100% sure. Consequently, we don't want to clutter our output with the data from this field and need a way of suppressing this output. More on this later ...

And now we're ready to start messing with the script ...

Defining the template file

The template file lists each of the data fields on a separate line.
There are 3 column attributes for each line.
  • The "field_name" is a unique placeholder for whatever the analyst wants to call the field. It must be unique or you will get funky results.
  • The "num_types" field is used to specify the number of "types". This should usually be set to 1 except for strings. For strings, the "num_types" field corresponds to the number of bytes in the string. You can set it to 0 if unknown and the script will extract from the given offset until it reaches a NULL character. Or you can also set it to a previously declared "field_name" (eg "msgsize") and the script will use the value it extracted for that previous "field_name" as the size of the string.
  • The "type" field defines how the script interprets the data. It can also indicate endianness for certain types via the "<" (LE) or ">" (BE) characters at the start of the type.

Here's the contents of our SMS Inbox definition file (called "dextract.def").
Note: comment lines begin with "#" and are ignored by the script.

# Note: Columns MUST be separated by " | " (spaces included)
# field_name | num_types | type
contactname | 0 | s
phone | 7 | s
msgsize | 1 | B
msg | msgsize | s
readflag | 1 | x
timestamp | 1 | >unix32

So we can see that a record consists of a "contactname" (null terminated string), "phone" (7 byte string), "msgsize" (1 byte integer), "msg" (string of "msgsize" bytes), "readflag" (1 byte whose output will be ignored/skipped) and "timestamp" (Big Endian 4 byte No. of seconds since Unix epoch).

Remember that "readflag" field we weren't sure about extracting earlier?
By defining it as a type "x" we can tell the script to skip processing those "Don't care" bytes.
So if you haven't reverse engineered every field (sleep is for chumps!), you can still extract the fields that you have figured out without any unnecessary clutter.
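To make the record walk concrete, here's a hedged Python sketch that manually parses records laid out like our example template (illustrative code only - dextract.py's actual internals differ):

import struct, datetime

def parse_record(data, offset):
    end = data.index(b'\x00', offset)   # contactname: num_types 0 = read until NULL
    contactname = data[offset:end]
    offset = end + 1
    phone = data[offset:offset+7]       # phone: fixed 7 byte string
    offset += 7
    msgsize = struct.unpack('B', data[offset:offset+1])[0]  # msgsize: 1 byte unsigned int
    offset += 1
    msg = data[offset:offset+msgsize]   # msg: deferred length of "msgsize" bytes
    offset += msgsize
    offset += 1                         # readflag: type "x" = skip, don't print
    secs = struct.unpack('>I', data[offset:offset+4])[0]    # timestamp: ">unix32"
    offset += 4
    ts = datetime.datetime.utcfromtimestamp(secs).isoformat()
    return (contactname, phone, msg, ts), offset

data = open('test-sms-inbox.bin', 'rb').read()
offset = 0
while offset < len(data):               # template repeats until end of file
    fields, offset = parse_record(data, offset)
    print(fields)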

Running the script

Typing the scriptname without arguments will print the usage help.  
Note: I used the Linux command "chmod a+x" to make my dextract.py executable.

sansforensics@SIFT-Workstation:~$ ./dextract.py
Running dextract v2013-12-11 Initial Version

Usage:
Usage#1: dextract.py -d defnfile -f inputfile
Usage#2: dextract.py -d defnfile -f inputfile -a 350 -z 428 -o outputfile

Options:
  -h, --help      show this help message and exit
  -d DEFN         Template Definition File
  -f FILENAME     Input File To Be Searched
  -o TSVFILE      (Optional) Tab Seperated Output Filename
  -a STARTOFFSET  (Optional) Starting File Offset (decimal). Default is 0.
  -z ENDOFFSET    (Optional) End File Offset (decimal). Default is the end of
                  file.
sansforensics@SIFT-Workstation:~$

The following values are output by the script:
  • Filename
  • File_Offset (offset in decimal for the extracted field value)
  • Raw_Value (uninterpreted value from extracted field)
  • Interpreted_Value (currently used only for dates, it uses the Raw_Value field and interprets it into something meaningful)

The default output to the command line can be a little messy, so the script can also output to a tab separated file (eg smstest.txt).
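For reference, the TSV output is just those columns joined by tab characters; a hand-rolled row might look like this sketch (values taken from the example run below, not the script's exact output formatting):

with open('smstest.txt', 'w') as tsv:
    tsv.write('Filename\tFile_Offset\tRaw_Value\tInterpreted_Value\n')
    tsv.write('test-sms-inbox.bin\t39\t1387069205\t2013-12-15T01:00:05\n')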

So getting back to our SMS example ...
We can run the script like this:

sansforensics@SIFT-Workstation:~$ ./dextract.py -d dextract.def -f /mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin -o smstest.txt
Running dextract v2013-12-11 Initial Version

Input file /mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin is 164 bytes

/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:0, nullterm str field = contactname, value = fozzie bear
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:12, defined str field = phone, value = 5551234
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:19, field = msgsize, value = 18
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:20, deferred str field = msg, value = Wokka Wokka Wokka!
Skipping 1 bytes ...
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:39, field = timestamp, value = 1387069205, interpreted date value = 2013-12-15T01:00:05
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:43, nullterm str field = contactname, value = kermit
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:50, defined str field = phone, value = 5551235
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:57, field = msgsize, value = 6
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:58, deferred str field = msg, value = Hi Ho!
Skipping 1 bytes ...
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:65, field = timestamp, value = 1387069427, interpreted date value = 2013-12-15T01:03:47
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:69, nullterm str field = contactname, value = Swedish Chef
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:82, defined str field = phone, value = 5554000
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:89, field = msgsize, value = 31
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:90, deferred str field = msg, value = Noooooooony Nooooooony Nooooooo
Skipping 1 bytes ...
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:122, field = timestamp, value = 1387080005, interpreted date value = 2013-12-15T04:00:05
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:126, nullterm str field = contactname, value = Beaker
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:133, defined str field = phone, value = 5550240
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:140, field = msgsize, value = 18
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:141, deferred str field = msg, value = Mewww Mewww Mewww!
Skipping 1 bytes ...
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:160, field = timestamp, value = 1387082773, interpreted date value = 2013-12-15T04:46:13

Exiting ...
sansforensics@SIFT-Workstation:~$

And if we import our "smstest.txt" output TSV into a spreadsheet application for easier reading, we can see:

Tab Separated Output File for all records in "test-sms-inbox.bin"


Note: The "readflag" field has not been printed and also note the Unix timestamps have been interpreted into a human readable format.

Now, say we're only interested in one record - the potentially insulting one from "kermit" that starts at (decimal) offset 43 and ends at offset 68.
We can run something like:

sansforensics@SIFT-Workstation:~$ ./dextract.py -d dextract.def -f /mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin -o smstest.txt -a 43 -z 68
Running dextract v2013-12-11 Initial Version

Input file /mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin is 164 bytes

/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:43, nullterm str field = contactname, value = kermit
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:50, defined str field = phone, value = 5551235
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:57, field = msgsize, value = 6
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:58, deferred str field = msg, value = Hi Ho!
Skipping 1 bytes ...
/mnt/hgfs/SIFT_WORKSTATION_2.14_SHARE/test-sms-inbox.bin:65, field = timestamp, value = 1387069427, interpreted date value = 2013-12-15T01:03:47

Exiting ...
sansforensics@SIFT-Workstation:~$

and the resultant output file looks like:


Tab Separated Output File for the "kermit" record only

Let's see our amphibious amore wriggle out of that one eh?

Limitations

The main limitation is that dextract.py relies on files having their data in distinctly ordered blocks (eg same ordered fields for each record type). Normally, this isn't a problem with most flat files containing one type of record.
If you have a file with more than one type of record (eg randomly combined SMS Inbox/Outbox with 2 types of record) then this script can still be useful but the process will be a bit longer/cumbersome.
You can use the start/end offset arguments to tell the script to extract a specific record from the file using a particular template definition (as shown previously).
For extracting another type of record, re-adjust the start/end offsets and point the script to the other template file.
Unfortunately, I couldn't think of a solution to extracting multiple record types randomly ordered in the same file (eg mixed Inbox/Outbox messages). Usually, there would be a record header/number preceding the record data but we can't be sure that would always be the case. So for randomly mixed records, we're kinda stuck with the one record at a time method.
However, if the records were written in a repeated fixed pattern eg recordtypeA, recordtypeB (then back to recordtypeA), the script should be able to deal with that. You could set up a single template file with the definition of recordtypeA then recordtypeB and then the script will repeatedly try to extract records in that order until the end offset/end of file.
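For example, a combined template for that alternating pattern might look something like this (hypothetical field names, same " | " format as before):

# recordtypeA fields first ...
a_sender | 0 | s
a_timestamp | 1 | >unix32
# ... then recordtypeB fields. The script then tries to extract A,B,A,B ... until the end offset
b_sender | 0 | s
b_msgsize | 1 | B
b_msg | b_msgsize | s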

FYI, as SQLite databases do NOT write NULL valued columns to file, rows can have a varying number of fields depending on the data values. Consequently, dextract.py and SQLite probably won't play well together (unless possibly utilized on a per record basis).

Obviously there are too many types of data fields to cater for them all. So for this initial version, I have limited it to the in-built Python types and some selected timestamps from Paul Sanderson's "History of Timestamps" post.

These selected timestamps also reflect the original purpose of cell phone file data extraction.

Supported extracted data types include:

# Number types:
# ==============
# x (Ignore these No. of bytes)
# b or B (signed or unsigned byte)
# h or H (BE/LE signed or unsigned 16 bit short integer)
# i or I (BE/LE signed or unsigned 32 bit integer)
# l or L (BE/LE signed or unsigned 32 bit long)
# q or Q (BE/LE signed or unsigned 64 bit long long)
# f (BE/LE 32 bit float)
# d (BE/LE 64 bit double float)
#
# String types:
# ==============
# c (ascii string of length 1)
# s (ascii string)
# Note: "s" types will have length defined in "num_types" column. This length can be:
# - a number (eg 140)
# - 0 (will extract string until first '\x00')
# - Deferred. Deferred string lengths must be set to a previously declared field_name
# See "msgsize" in following example:
# msg-null-termd | 0 | s
# msg-fixed-size | 140 | s
# msgsize | 1 | B
# msg-deferred | msgsize | s
# msg-to-ignore | msgsize | x
#
# Also supported are:
# UTF16BE (BE 2 byte string)
# UTF16LE (LE 2 byte string)
# For example:
# UTF16BE-msg-null-termd | 0 | UTF16BE
# UTF16BE-msg-fixed-size | 140 | UTF16BE
# UTF16BE-msgsize | 1 | B
# UTF16BE-msg-deferred | msgsize | UTF16BE
#
# Timestamp types:
# =================
# unix32 (BE/LE No. of secs since 1JAN1970T00:00:00 stored in 32 bits)
# unix48ms (BE/LE No. of millisecs since 1JAN1970T00:00:00 stored in 48 bits)
# hfs32 (BE/LE No. of secs since 1JAN1904T00:00:00)
# osx32 (BE/LE No. of secs since 1JAN2001T00:00:00)
# aol32 (BE/LE No. of secs since 1JAN1980T00:00:00)
# gps32 (BE/LE No. of secs since 6JAN1980T00:00:00)
# unix10digdec (BE only 10 digit (5 byte) decimal No. of secs since 1JAN1970T00:00:00)
# unix13digdec (BE only 13 digit (7 byte) decimal No. of ms since 1JAN1970T00:00:00)
# bcd12 (BE only 6 byte datetime hex string  eg 071231125423 = 31DEC2007T12:54:23)
# bcd14 (BE only 7 byte datetime hex string eg 20071231125423 = 31DEC2007T12:54:23)
# dosdate_default (BE/LE 4 byte int eg BE 0x3561A436 = LE 0x36A46135 = 04MAY2007T12:09:42)
# dosdate_wordswapped (BE/LE 4 byte int eg BE 0xA4363561 = LE 0x613536A4 = 04MAY2007T12:09:42)
#
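Most of the non-BCD timestamp types boil down to "N units since epoch X". As a rough illustration, here's how a few of them can be interpreted in Python (a sketch, not dextract.py's actual code):

import datetime

EPOCHS = {
    'unix32': datetime.datetime(1970, 1, 1),
    'hfs32':  datetime.datetime(1904, 1, 1),
    'osx32':  datetime.datetime(2001, 1, 1),
    'aol32':  datetime.datetime(1980, 1, 1),
    'gps32':  datetime.datetime(1980, 1, 6),
}

def interpret_secs(epoch_name, secs):
    # add the extracted seconds count to the relevant reference date
    return (EPOCHS[epoch_name] + datetime.timedelta(seconds=secs)).isoformat()

print(interpret_secs('unix32', 1387069205))  # 2013-12-15T01:00:05
print(interpret_secs('gps32', 1019605572))   # 2012-04-27T23:46:12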


How the code works - a brief summary ...

The code reads each line of the specified template definition file and creates a list of field names. It also creates a dictionary (keyed by field name) for sizes and another dictionary for types.
Starting at the given file offset, the script now iterates through the list of fieldnames and extracts/interprets/prints the data via the "parse_record" method. It repeats this until the end offset (or end of file) is reached.
The main function doesn't involve many lines of code at all. The "parse_record" function and other subfunctions are where things start to get more involved and they make up the bulk of the code. I think I'll leave things there - no one in their right mind wants to read a blow by blow description about the code.
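Still, a minimal Python sketch of just the definition-file parsing step might help (illustrative names, not the script's actual identifiers):

field_names = []  # ordered field names - this defines the record layout
sizes = {}        # field_name -> num_types column (a count, 0, or another field_name)
types = {}        # field_name -> type column (eg "s", "B", ">unix32")

for line in open('dextract.def'):
    line = line.strip()
    if not line or line.startswith('#'):
        continue  # skip comments and blank lines
    name, size, ftype = line.split(' | ')
    field_names.append(name)
    sizes[name] = size
    types[name] = ftype

print(field_names)
# ['contactname', 'phone', 'msgsize', 'msg', 'readflag', 'timestamp']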

Thoughts on Python

I can see why it has such a growing following. It's similar enough to C and Perl that you can figure out what a piece of code does fairly quickly.
The indenting can be a bit annoying but it also means you don't have to spend extra lines on the enclosing {}s. So code seems shorter/purdier.

The online documentation and Stackoverflow website contained pretty much everything I needed - from syntax/recipe examples to figuring out which library functions to call.
It's still early days - I haven't written any classes or tried any inheritance. For short scripts this might be overkill anyway *shrug*.
As others have mentioned previously, it probably comes down to which scripting language has the most appropriate libraries for the task.
SIFT v2.14 uses Python 2.6.4 so whilst it isn't the latest development environment, I figured having a script that works with a widely known/used forensic VM is preferable to having the latest/greatest environment running Python 3.3.
I used jEdit as my Python editor but could have also used the gedit text editor already available on SIFT. You can install jEdit easily enough via SIFT's Synaptic Package Manager. Let me know in the comments if you think there's a better Python editor.

So ... that's all I got for now. If you find it useful or have some suggestions (besides "Get into another line of work Monkey!"), please leave a comment. Hopefully, it will prove useful to others ... At the very least, I got to play around with Python. Meh, I'm gonna claim that Round 1 was a draw :)

Wednesday 18 September 2013

Reflections of a Monkey Intern and some HTCIA observations


Inspired by the approaching 12 month point of my internship and this Lifehacker article, I thought I'd share some of my recent thoughts/experiences. Hopefully, writing this drivel will force me to better structure/record my thoughts. It's kinda like a memo to myself but feel free to share your thoughts in the comments section.

Communication

This is vital to any healthy internship. Ensuring that both intern/employer have the same/realistic expectations will help in all other areas.
Initially, I found it beneficial to over-communicate if I was unsure (eg explain what I did and then ask about any uncertainties). Asking questions is also a good way for supervisors to gauge an intern's understanding. Perhaps the intern's line of questioning might uncover additional subjects which the supervisor can help with.

Take detailed notes of any tasks you have performed. This includes the time spent (be honest!) and any notable achievements/findings. These notes will make it easier for you to communicate to your supervisor exactly what has been done.
Later, you can also use these notes to:
- help you pimp your CV (eg "I peer-reviewed X client deliverable reports") and
- see how far you've progressed (eg now it only takes me Y minutes to do that task).

Goal Setting & Feedback

Having an initial goal of "getting more experience" is OK but when the work load surges/subsides, it's easy to lose track of where your training was up to before the interruption. Regular feedback sessions where both parties can communicate short term goals (eg get more experience with X-Ways) can help keep you on track. They don't have to be long, formal discussions - if things are going well, it might only be a 5 minute conversation.
It's also easy to fall into a comfort zone and say "everything's peachy". Don't leave it all to your supervisor - think about other new skills/tools you might like to learn/apply.
Regular communication with your supervisor about the internship will also encourage/help them think about your progress.

The internship should be geared more toward the intern's benefit than the employer's, but it is still a two way street. If you feel like your needs are not being met, speak up but also realise that there are mundane tasks in every job and that you can usually learn something from almost any task. The internship is all about experiencing the good, the not so good and the "I never want to do that ever again!".

Rules & Guidelines

Follow your supervisor's instructions but you don't have to be a mindless robot about it. Whatever the task, try to think of ways to improve/streamline a process/description. eg Would a diagram help with this explanation? Can I write a script to automate this task? Could we word this description better - if so, be prepared to provide alternatives. However, before you implement your game changing improvements, be sure to discuss them with your supervisor first!

Pace Yourself

As an intern, you are not expected to know everything. However, you can't sit on your paws and expect to be taught everything either. I guess it's like learning to ride a bike - your supervisor has done it before but there's only so much they can tell you before it's time for you to do it yourself. Along the way, you might fall/stuff up but that's all part of learning.
Everyone learns at different rates. Try not to get too high/too low about your progress. At the start, it's tempting to "go hard" but interns should also make the time to ensure that they are on-track. In this regard, knowing when to ask for help or for extra info can make an internship so much easier. If something feels like it's taking too long, it's probably time to ask your supervisor for help.
Also, allow yourself time to decompress/be simian. This will require you to ask/know what work is coming up. Remember, they wouldn't be taking on an intern if business was slow but interns are (supposedly!) human too. We all need a break now and then. If you have a prior commitment, let your supervisor know as soon as possible.
I have noticed that I tend to get absorbed in a problem and can work long hours on it until it's resolved. However, when that's over, I like to slow things down to recharge the batteries. During this slower period (when the case load wanes), I might be doing research or writing scripts or just relaxing and doing non-forensic stuff. Knowing and being honest about your preferred working style can also help you choose the most appropriate forensics job (eg a small private company vs a large law enforcement agency).

Confidence & Mistakes

Despite my awesome cartooning ability, I would not say that I am a naturally confident and sociable person. New unknowns (eg social situations) can be a little daunting for me. However, I am learning that confidence is an extension of experience. The more experience you get, the more situational awareness you develop. I think this "awareness" can then appear to others as confidence (eg "Oh I've seen this before ... if we do ABC we need to think about XYZ").
I still cringe every time I realise that I've made a mistake but I also realise that mistakes are part of the learning process/experience. The main thing is to get back on the bike and not to repeat the mistake.
I also like to use my mistakes as motivation to achieve something extra positive. For example, if I make a mistake in one section of a report, I use it as motivation to look for areas of improvement for the other sections. It's kinda corny but this pseudo self-competitiveness keeps things interesting (especially when writing reports).

Use Your Breaks/Free Time Wisely

Like most monkeys, I have found it easier to retain information by doing rather than reading (ie monkey-see, monkey-do). That said, there's no way I'm gonna be able to do everything.
One thing I like to do with my spare time is to try to keep current with DFIR news (eg new tools/technology, popular consumer applications). The trends of today will probably contain the evidence we'll need tomorrow. My approach is to read as many relevant blogs/forums as possible, accepting that whilst I may not remember every detail, I'll understand enough that if/when I need this information, my monkey-brain goes "Yup, so and so did a post on this last year" and I can re-familiarize myself with the specific details.

Certification ... blech! I have mixed feelings about this. I am sure many recruiters just skim resumes looking for key words such as EnCe or ACE. Knowing a tool doesn't necessarily make you a better investigator. Knowing what artifacts the tools are processing and how they work, does. Writing your own tools to process artifacts? Even better!
However, as an intern looking for a full time job we also have to think of how to sell ourselves to an employer (no, not like that...). ie What skills/experience are employers looking for?
Obviously your chances of landing a full time job improve if you have some (certified) experience with the forensic tools that they use. While I have used various commercial tools for casework, I've also been fortunate that my supervisor has let me use them to do additional practice cases. This has given me enough experience to get a vendor based cell phone certification that I can now add to my CV.
Regardless of whether your shop uses commercial or open source tools, getting some extra "seat time" working on previous/practice cases is a great way to improve the confidence/speed at which you work. And being an intern, your supervisor can also act as a trainer/coach.

Meeting New People

It's becoming apparent to me that in DFIR, who you know plays just as important a role as what you know. For example, your business might get a referral from someone you meet at a conference or maybe that someone can help you with some forensic analysis or land a new job. Being a non-drinking, naturally shy intern monkey, meeting new people can intimidate the crap outta me. However, I also realise that it's a small DFIR world and that we really should make the time to connect with other DFIRers. Even if it's as simple as reading someone's blog post and sending them an email to say thank you. Or perhaps suggesting some improvements for their process/program. FYI, bloggers REALLY appreciate hearing about how their post helped someone.
Your supervisor is also probably friendly with a bunch of other DFIRers. Use the opportunity to make some new acquaintances.

HTCIA Thoughts

I recently spent 2 weeks with my supervisor before heading out to the HTCIA conference. It was the first time we had met in person since I started the internship but because we had already worked together for a while, it felt more like catching up with a friend.
During the first week, I got some hands-on experience imaging hard drives and cell phones (both iPhone/Android) for some practice cases. Having a remote internship meant that this was the first time I got to use this equipment, which was kinda cool. I also practiced filling out Chain of Custody forms and following various company examination procedures.
During the second week, I got to observe the business side of a private forensics company as we visited some new clients on site. I noticed that private forensics involves more than just technical skills and the ability to explain your analysis. A private forensics company also has to convince prospective clients that they can help and then regularly address any of the client's concerns. This increased level of social interaction was something that I hadn't really thought about previously. The concept of landing/keeping clients is probably the main difference between Law Enforcement and private practice.
As part of my supervisor's plan to improve their public speaking skills, they gave a presentation on Digital Forensics to a local computer user's group. After the main presentation, I talked for 10 minutes on cell phone forensics. Whilst it had been a while since I last talked in public, I was not as nervous as I'd thought I'd be. I think I found it easier because my supervisor gave a great presentation and I could kinda base my delivery on theirs. I noticed that an effective presentation involves engaging the audience with questions (ie making them think), keeping a brisk pace and keeping the technical material at an audience appropriate level. The use of humour (eg anecdotes, pictures) can also help with pacing. Later, I would see these same characteristics during the better HTCIA labs.

HTCIA was held this year at the JW Marriott Hotel in Summerlin, Nevada. It's about a 20 min drive from the Las Vegas strip, so you really needed a car; otherwise you were kinda stuck at the hotel.
The labs/lectures started on Monday afternoon and ended on Wednesday afternoon.
The first couple of days allowed for plenty of face time with the vendors. Each vendor usually had something to give away. At the start, I *almost* felt guilty about taking the free stuff but towards the end it was more like "what else is up for grabs?" LOL. I probably did not maximise my swag but how many free pens/usb sticks/drink bottles can you really use?

Socially, I probably didn't mix as much as I could have. My supervisor and I spent a fair amount of time working on the new cases whenever we weren't attending labs/lectures. I still managed to meet a few people though and when I was feeling tired/shy I could just hang around my supervisor and just listen in/learn more about the industry. The good thing about forensic conferences is that most of the attendees are fellow geeks and so when there's a lull in the conversation, we can default to shop talk (eg What tools do you use? How did you get started in forensics?).

There were several labs that stood out to me. Listed in chronological order, they were:
Monday PM: Sumuri's "Mac Magic - Solving Cases with Apple Metadata" presented by Steve Whalen. This lab mentioned that Macs have extended metadata attributes which get lost when analysing from non-HFS+ platforms. Hence, it's better to use a Mac to investigate another Mac. The lab also covered Spotlight indexing, importers and exiftool. As a novice Mac user, this was all good stuff to know. Steve has a witty and quick delivery but he also took the time to ensure that everyone could follow along with any demos.

Tuesday PM: SANS "Memory Forensics For The Win" presented by Alissa Torres ( @sibertor ). Alissa demonstrated Volatility 2.2 on SIFT using a known malware infected memory dump. She also gave out a DVD with SIFT and various malware infected memory captures. Alissa mentioned that the material was taken from a week long course so even with her energetic GO-GO-GO delivery, it was a lot to cover in 1.5 hours. The exercises got students to use Volatility to identify malicious DLLs/processes from a memory image, extract malicious DLLs for further analysis and also inspect an infected registry key. The handout also included the answers which made it easier to follow along/catch up if you fell behind. I had seen Alissa's SANS 360 presentation on Shellbags and Jesse Kornblum's SANS Webcast on Memory Forensics so I kinda had an inkling of what to expect. But there is just so much to know about how Windows works (eg which processes do what, how process data is stored in memory) that this HTCIA session could be compared to drinking from a fire hose. It would be interesting to see if the pace is a bit more easy going when Alissa teaches "SANS FOR526: Windows Memory Forensics In-Depth". However, I definitely think this session was worth attending - especially as I got a hug after introducing myself :) Or maybe I just need to get out of the basement more often LOL.

Wednesday AM: SANS "Mac Intrusion Lab" presented by Sarah Edwards ( @iamevltwin ). Sarah's talk was enthusiastic, well paced and well thought out - she would discuss the theory and then show corresponding example Macintosh malware artefacts. Sarah covered quite a bit in the 1.5 hours - how to check for badness in installed applications/extensions (drivers), autoruns, Internet history, Java, email, USB and log analysis. Interestingly, she also mentioned that Macs usually get hacked via a Java vulnerability/social engineering. It was good to meet Sarah in person and it also let me figure out the significance of her email address. It looks like her SANS 518 course on Mac and iOS forensics will be a real winner.

Overall, it was an awesome trip visiting my supervisor and a good first conference experience.  Hopefully, I can do it again real soon.
Please feel free to leave a comment about internships and/or the HTCIA conference below.

Friday 23 August 2013

HTCIA Monkey



Just a quick post to let you know that this monkey (and friends) will be attending HTCIA 2013 from 8-11 Sept in Summerlin, Nevada.
 So if you're in the neighbourhood, please feel free to play spot the monkey and say hello. I promise I won't bite ... unless you try to touch my bananas (heh-heh).

Friday 26 July 2013

Determining (phone) offset time fields



Let me preface this by saying this post is not exhaustive - it only details what I have been able to learn so far. There's bound to be other strategies/tips but a quick Google didn't return much (hence this post). Hopefully, both the post and accompanying script (timediff32.pl available from here) will help my fellow furry/not-so-furry forensicators determine possible timestamp fields.

In a recent case, we had a discovery document listing various SMS/MMS messages and their times as determined by both manual phone inspection and telecommunications provider logs.
However, whilst our phone extraction tool was able to image the phone and display all of the files, it wasn't able to automagically parse the SMS/MMS databases. Consequently, we weren't immediately able to correlate what was in the discovery document with our phone image. Uh-oh ...

So what happens when you have an image of a phone but your super-duper phone tool can't automagically parse the SMS/MMS entries?
Sure, you might be able to run "strings" or "grep" and retrieve some message text but without the time information, it's probably of limited value.
Methinks it's time to strap on the caffeine helmet and fire up the Hex editor!

Background

Time information is typically stored as an offset number of units (eg seconds) since a particular reference date/time. Knowing the reference date is half the battle. The other half is knowing how the date is stored. For example, does it use bit fields for day/month/year etc. or just a single Big or Little Endian integer representing the number of seconds since date X? Sanderson Forensics has an excellent summary of possible date/time formats here.

Because we have to start somewhere, we are going to assume that the date/time fields are stored as a 32 bit integer representing the number of seconds since date X. This is how the very popular Unix epoch format is stored. One would hope that the simplest methods would also be the most popular or that there would be some universal standard/consistency for phone times right? Right?!
In the case mentioned previously, the reference dates actually varied depending on which database you were looking at. For example, timestamps in the MMS database file used Unix timestamps (offset from 1JAN1970) whereas the SMS Inbox/Outbox databases used GPS time (offset from 6JAN1980). Nice huh?
Anyhow, what these dates had in common was that they both used a 4 byte integer to store the number of seconds since their respective reference dates. If only we had a script that could take a target time and reference date and print out the (Big Endian/Little Endian) hex representations of the target time. Then we could look for these hex representations in our messages in order to figure out which data corresponds to our target time.

Where to begin?

Ideally, there will be a file devoted to each type of message (eg SMS inbox, SMS outbox, MMS). However, some phones use a single database file with multiple tables (eg SQLite) to store messages.
Either way, we should be able to use a Hex editor (eg WinHex) to view/search the data file(s) and try to determine the record structure.

Having a known date/time for a specific message will make things a LOT easier. For example, if someone allegedly sent a threatening SMS at a particular time and you have some keywords from that message, then using a hex editor you can search your file(s) for those keywords to find the corresponding SMS record(s). Even a rough timeframe (eg month/year) will help narrow the possible date/time values.
For illustrative purposes, let's say the following fictional SMS was allegedly sent on 27 April 2012 at 23:46:12 "Bananas NOW or prepare to duck and cover! Sh*ts about to get real!".

OK assuming that we have searched and found a relevant text string and we know its purported target date, we now take a look at the byte values occurring before/after the message text.
Here's a fictional (and over simplified) example record ...

<44 F2 C5 3C> <ascii text="Bananas NOW or prepare to duck and cover! Sh*ts about to get real!"> <12 34 54 67> <ascii text="555-1234"> <89 12 67 89>

Note: I'm using the "< >" to group likely fields together and make things easier to read.

This is where our simple script (timediff32.pl) comes in to play. Knowing the target date/date range, we can try our script with various reference dates and see if the script output matches/approximates a particular group of 4 bytes around our text string.
Here's an example of using the script:

sansforensics@SIFT-Workstation:~$ ./timediff32.pl -ref 1970-01-01T00:00:00 -target 2012-04-27T23:46:12

Running timediff32.pl v2013.07.23

2012-04-27T23:46:12 is 1335570372 (decimal)
2012-04-27T23:46:12 is 0x4F9B2FC4 (BE hex)
2012-04-27T23:46:12 is 0xC42F9B4F (LE hex)

sansforensics@SIFT-Workstation:~$


We're using the Unix epoch (1JAN1970 00:00:00) as reference date and our alleged target date of 27APR2012 23:46:12.
Our script tells us the number of seconds between the 2 dates is 1335570372 (decimal). Converted to a Big Endian hexadecimal value this is 0x4F9B2FC4. The corresponding Little Endian hexadecimal value is 0xC42F9B4F.
So now we scan the bytes around the message string for these hex values ...
Checking our search hit, we don't see any likely date/time field candidates.

<44 F2 C5 3C><ascii text="Bananas NOW or prepare to duck and cover! Sh*ts about to get real!"><12 34 54 67><ascii text="555-1234"><89 12 67 89>

OK now lets try our script with the GPS epoch (6JAN1980 00:00:00) as our reference date ...

sansforensics@SIFT-Workstation:~$ ./timediff32.pl -ref 1980-01-06T00:00:00 -target 2012-04-27T23:46:12

Running timediff32.pl v2013.07.23

2012-04-27T23:46:12 is 1019605572 (decimal)
2012-04-27T23:46:12 is 0x3CC5F244 (BE hex)
2012-04-27T23:46:12 is 0x44F2C53C (LE hex)

sansforensics@SIFT-Workstation:~$


Now our script tells us the number of seconds between the 2 dates is 1019605572 (decimal). Converted to a Big Endian hexadecimal value this is 0x3CC5F244. The corresponding Little Endian hexadecimal value is 0x44F2C53C.
Returning to our message string hit, we scan for any of these hex values and ...

<44 F2 C5 3C><ascii text="Bananas NOW or prepare to duck and cover! Sh*ts about to get real!"><12 34 54 67><ascii text="555-1234"><89 12 67 89>

Aha! The 4 byte field immediately before the text string "Bananas NOW or prepare to duck and cover! Sh*ts about to get real!" appears to match our script output for a LE GPS epoch! Fancy that! Almost like it was planned eh? ;)

So now we have a suspected date/time field location, we should look at other messages to confirm there's a date/time field occurring just before the message text. Pretty much rinse/repeat what we just did. I'll leave that to your twisted imagination.

If we didn't find that hex hit, we could keep trying different reference dates. There's a shedload of potential reference dates listed here but there's also the possibility that the source phone is not using a 4 byte integer to store the number of seconds since a reference date.
If you suspect the latter, you should probably check this out for other timestamp format possibilities.

OK so we've tried out our script on other messages and have confirmed that the date/time field immediately precedes the message text. What's next? Well my script monkey instincts tell me to create a script that can search a file for a text message, parse any metadata fields (eg date, read flag) and then print the results to a file for presentation/further processing (eg print to TSV and view in Excel). This would require a bit more hex diving to determine the metadata fields and message structure but the overall process would be the same ie start with known messages and try to determine which bytes correspond to what metadata. I'm not gonna hold your paw for that one - just trying to show you some further possibilities. In case you're still interested, Mari DeGrazia has written an excellent post on reverse engineering sms entries here.

Further notes on determining date/time fields

It is likely that there will be several groups of bytes that consistently change between message entries. Sometimes (if you're lucky) these fields will consistently increase as you advance through the file (ie newer messages occur later in the file). So if you consistently see X bytes in front of/behind a text message and the value of those X bytes changes incrementally - it's possibly a date field or maybe it's just an index.

An interesting observation for date/time offset integers is that as time increases, the least significant byte will change more rapidly than the most significant byte. So 0x3CC5F244 (BE hex) might be followed by 0x3CC5F288 (BE hex). Or 0x44F2C53C (LE hex) might be followed by 0x88F2C53C (LE hex). This can help us decide whether a date field is Big Endian or Little Endian and/or it might be used to determine suspected date/time fields.

Be aware that not all time fields use the same epoch/are stored the same way (even on the same phone).

I found that writing down the suspected schema helped me to later interpret any subsequent messages (in hex). For example:
<suspected 4 byte date field><SMS message text><unknown 4 bytes><ASCII string of phone number><unknown 4 bytes>
So when I started looking at multiple messages, I didn't need to be Rain-man and remember all the offsets (eg "Oh that's right, there's 4 bytes between the phone number and last byte of the SMS message text"). In my experience, there are usually a lot more fields (10+) than shown in the simplified example above.

How the Script works

The script takes a reference date/time and a target date/time and then calculates the number of days/hours/minutes/seconds between the two (via the Date::Calc::Delta_DHMS function).
It then converts this result into seconds and prints the decimal/Big Endian hexadecimal/Little Endian hexadecimal values.
The Big Endian hexadecimal value can be printed via the printf "%x" argument (~line 90).
To calculate the Little Endian hexadecimal value we have to use the pack / unpack Perl functions. Basically we convert ("pack") our decimal number into a Big-endian unsigned 32 bit integer binary representation and then unconvert ("unpack") that binary representation as a Little-endian unsigned 32 bit integer (~line 92). This effectively byte swaps a Big-endian number into a Little endian number. It shouldn't make a difference if we pack BE and unpack LE or if we pack LE and then unpack BE. The important thing is the pack/unpack combination uses different "endian-ness" so the bytes get swapped/reversed.
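For the curious, the same byte swap is a couple of lines with Python's struct module (this just mirrors the Perl pack/unpack combination - it's not part of timediff32.pl):

import struct

secs = 1019605572                # seconds between reference and target dates
be = struct.pack('>I', secs)     # pack as big endian unsigned 32 bit int
le = struct.unpack('<I', be)[0]  # unpack as little endian = byte swapped
print('0x%08X (BE hex)' % secs)  # 0x3CC5F244
print('0x%08X (LE hex)' % le)    # 0x44F2C53C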

Testing

This script has been tested on both the SANS SIFT VM (v2.14) and ActiveState Perl (v5.16.1) on Win7.

Selected epochs have been validated either using the DCode test data listed on http://www.digital-detective.co.uk/freetools/decode.asp or via known case data. I used selected dates given in the DCode table as my "target" arguments and then verified that my script output raw hex/decimal values that matched the table's example values.
The tested epochs were:
Unix (little/big endian ref 1JAN1970), HFS/HFS+ (little/big endian ref 1JAN1904), Apple Mac Absolute Time/OS X epoch (ref 1JAN2001) and GPS time (tested using case data ref 6JAN1980).

Note: Due to lack of test data, I have not been able to test the script with target dates which occur BEFORE the reference date. This is probably not an issue for most people but I thought I should mention it in case your subject succeeded in travelling back in time/reset their phone time.

Final Words

We've taken a brief look at how we can use a new script (timediff32.pl) to determine one particular type of timestamp (integer seconds since a reference date).
While there are excellent free tools such as DCode by Digital Detective and various other websites that can take a raw integer/hex value and calculate a corresponding date, if your reference date is not catered for, you have to do it yourself. Additionally, what happens when you have a known date but no raw integer/hex values? How can we get a feel for what values could be timestamps?
With this script it is possible to enter in a target date and get a feel for what the corresponding integer/hex values should look like under many different reference dates (assuming they are stored in integer seconds).

If you have any other hints/suggestions for determining timestamp fields please leave a comment.

Wednesday 17 July 2013

G is 4 cookie! (nomnomnom)

What is it?

A Linux/Unix based Perl script for parsing cached Google Analytic requests. Coded/tested on SANS SIFT Virtual Machine v2.14 (Perl v5.10). The script (gis4cookie.pl) can be downloaded from:
http://code.google.com/p/cheeky4n6monkey/downloads/list

The script name is pronounced "G is for cookie". The name was inspired by this ...




Basically, Google Analytics (GA) tracks website statistics. When you browse a site that utilizes GA, your browser somewhat sneakily makes a request for a small invisible .GIF file. Also passed with that request is a bunch of arguments which tell the folks at Google various cookie type information such as the visiting times, page title, website hostname, referral page, any search terms used to find the website, Flash version and whether Java is enabled. These requests are consequently stored in browser cache files. The neat part is that even if a user clears their browser cache or deletes their user profile, we may still be able to gauge browsing behaviour by looking for these GA requests in unallocated space.

Because there is potentially a LOT of data that can be stored, we felt that creating a script to extract this information would help us (and the forensics community!) save both time and our ageing eyeballs.

For more information (eg common browser cache locations) please refer to Mari DeGrazia's blog post here.
Other references include Jon Nelson's excellent DFINews article on Google Analytic Cookies
and the Google Analytics documentation.

How It Works

1. Given a filename or a directory containing files, the script will search for the "google-analytics.com/__utm.gif?" string and store any hit file offsets.
2. For each hit file offset, the script will try to extract the URL string and store it for later parsing.
3. Each extracted URL hit string is then parsed for selected Google Analytic arguments which are printed either to the command line or to a user specified Tab Separated Variable file.
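gis4cookie.pl implements this in Perl (using grep), but the basic idea looks something like this Python sketch ("cachefile" is a hypothetical input name and only two of the parsed arguments are shown):

import re

data = open('cachefile', 'rb').read().decode('latin-1')

for hit in re.finditer(r'google-analytics\.com/__utm\.gif\?', data):
    # grab the rest of the URL (capped, like the script's 2000 char limit)
    url = data[hit.start():hit.start() + 2000].split('\x00')[0]
    utmhn = re.search(r'utmhn=([^&]*)', url)  # hostname
    utmp = re.search(r'utmp=([^&]*)', url)    # page request
    print('%d\t%s\t%s' % (hit.start(),
                          utmhn.group(1) if utmhn else 'NA',
                          utmp.group(1) if utmp else 'NA'))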

The following Google Analytic arguments are currently parsed/printed:
utma_first_time
utma_prev_time
utma_last_time
utmdt (page title)
utmhn (hostname)
utmp (page request)
utmr (referring URL)
utmz_last_time
utmz_sessions
utmz_sources (organic/referral/direct)
utmz_utmcsr (source site)
utmz_utmcmd (type of access)
utmz_utmctr (search keywords)
utmz_utmcct (path to website resource)
utmfl (Flash version)
utmje (Java enabled).
You probably won't see all of these parameters in a given GA URL. The script will print "NA" for any missing arguments. More information on each argument is available from the references listed previously.

To Use It

You can type something like:
./gis4cookie.pl -f inputfile -o output.tsv -d

This will parse "inputfile" for GA requests and output to a tab separated file ("output.tsv"). You can then import the tsv file into your favourite spreadsheet application.
To improve readability, this example command also decodes URI encoded strings via the -d argument (eg convert %25 into a "%" character). For more info on URI/URL/percent encoding see here.

Note: The -f inputfile name cannot contain spaces.

Alternatively, you can point the script at a directory of files:
./gis4cookie.pl -p /home/sansforensics/inputdir

In this example, the script prints its output to the command line (not recommended due to the number of parameters parsed). This example also does not decode URI/URL/percent encoding (no -d argument).

Note: The -p directory name MUST use an absolute path (eg "/home/sansforensics/inputdir" and not just "inputdir").

Other Items of Note

  • The script is Linux/Unix only (it relies on the Linux/Unix "grep" executable).
  • There is a 2000 character limit on the URL string extraction. This was put in so the URL string extraction didn't loop forever. So if you see the message "UH-OH! The URL string at offset 0x____ appears to be too large! (>2000 chars). Ignoring ..." you should be able to get rid of it by increasing the "$MAX_SZ_STRING" value. Our test data didn't have a problem with 2000 characters but your freaky data might. The 2000 character count starts at the "g" in "google-analytics.com/__utm.gif?".
  • Some URI encodings (eg %2520) will only have the first term translated (eg "%2520" converts to "%20"). This is apparently how GA encodes some URL information. So you will probably still see "%20"s in some fields (eg utmr_referral, utmz_utmctr). But at least it's a bit more readable.
  • The script does not find/parse UTF-16/Unicode GA URL strings. This is because grep doesn't handle Unicode. I also tried calling "strings" instead of "grep" but it had issues with the "--encoding={b,l}" argument not finding every hit.
  • The utmz's utmr variable may have issues extracting the whole referring URL. From the test data we had, sometimes there would be "utmr=0&" and other (rarer) times utmr would equal a URI encoded http address. I'm not 100% sure what marks the end of the URI encoded http address because there can also be embedded &'s and additional embedded URLs. Currently, the script is looking for either an "&" or a null char ("\x00") as the utmr termination flag. I think this is correct but I can't say for sure ...
  • The displayed file offsets point to the beginning of the search string (ie the "g" in "google-analytics.com/__utm.gif?"). This is not really a limitation so don't freak out if you go to the source file and see other URL request characters (eg "http://www.") occurring before the listed file offset.
  • Output is sorted first by filename, then by file offset address. There are a bunch of different time fields so it was easier to sort by file offset rather than time.

Special Thanks

To Mari DeGrazia for both sharing her findings and helping test the script.
To Jon Nelson for writing the very helpful article on which this script is based.
To Cookie Monster for entertaining millions ... and for understanding that humour helps us learn.
"G is 4 Cookie and that's good enough for me!" (you probably need to watch the video to understand the attempted humour)

Thursday 21 February 2013

Creating a Perl script to retrieve Android SMS


This script/post was inspired by Mari DeGrazia after she had to manually parse hundreds of Android SMS messages. Without her prior research and the principles she discusses in her post, there's little chance I would have attempted this script. Thanks for sharing Mari!
This post continues on from where Mari's post ended. We'll look further at an example Android SMS SQLite schema and then use it to explain how our SMS extraction script (sms-grep.pl) works. We will also walk you through how to use our script and what kind of output you can expect.

UPDATE 2014-01-23:
The code for sms-grep.pl has been revised/relocated to GitHub
It now handles Cell Header "Serial Type" values of 8 and 9 (ie Header values associated with 0 and 1).




UPDATE 2013-04-25:
Like a punch in the guts, some additional issues have arisen with the previous sms-grep.pl script (v2013-02-16).
Changes were made. Some bananas may have been thrown.


Issue #1
During further testing, we noticed that the initial version of sms-grep.pl was not reading some INTEGER fields correctly.
This has been corrected in the latest version 2013-04-14 with the addition of a new function "read_payload_int".
The previous script version tried to read payload INTEGER fields as VARINTs. This seems to work with positive integers less than 128 and so went unnoticed - as integer payload data in the "sms" table is typically limited to 1's or 0's. However, the "status" fields can read -1 and "thread_id" can be greater than 127 so the correction was made.
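For background, SQLite integer payload fields are fixed-size big endian two's complement values (the cell header's serial type gives the byte length), unlike the varint-encoded cell header fields - which is why reading them as VARINTs only worked for small positive values. A minimal Python sketch of the idea (not the script's actual Perl code):

import struct

INT_SIZES = {1: 1, 2: 2, 3: 3, 4: 4, 5: 6, 6: 8}  # serial type -> payload bytes

def read_payload_int(data, offset, nbytes):
    # big endian two's complement, so -1 is stored as 0xFF, 0xFFFF, etc.
    val = 0
    for b in struct.unpack('%dB' % nbytes, data[offset:offset + nbytes]):
        val = (val << 8) | b
    if val >= (1 << (nbytes * 8 - 1)):  # sign bit set means negative
        val -= (1 << (nbytes * 8))
    return val

print(read_payload_int(b'\xff', 0, 1))      # -1 (eg an sms "status" value)
print(read_payload_int(b'\x01\x00', 0, 2))  # 256 (eg a 2 byte "thread_id")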

Issue #2
The script searches for a known phone number ("address" field) and then tries to go backwards a set number of fields until it hits the cell header size. Previously, it treated the payload fields prior to the "address" field (ie "thread_id") as VARINTs (like the cell header fields). As mentioned previously, this should not prove troublesome if the "thread_id" field is a single byte between 0 and 127. However, if the "thread_id" is greater than 127 or uses multiple bytes, there may be issues with ascertaining the cell header size and hence parsing the sms cell. See also the sms-cell-example-revised.png pic shown below in the original post.

The new version of the script requires the "-p" argument which represents the number of bytes between the last cell header field (VARINT) and the phone number "address" field. For our example schema, using "-p 2" means there's 2 bytes being used for the "thread_id" which sits in between the last cell header field and the "address" field.
This also means that to be thorough, the script will have to be run twice - once with "-p 1" and again with "-p 2" to cater for the possible 1 and 2 byte "thread_id" sizes. I decided to make it an argument rather than hard code it because the schema may change in the future. In practice, the "thread_id" will probably not exceed 2 bytes as the maximum "thread_id" value of 0xFFFF should be sufficiently large. If there's no bytes between the last cell header field and the phone number/search term field, you can use "-p 0".



Issue #3
As the rowid value is stored outside of the Cell Header and Cell Data sections, the script is currently unable to report the rowid value accurately. Typically, the Cell Header section will store a 0x0 value for the field that contains the rowid. Consequently, the script interprets the field value as 0.

Changes to Configuration File Format

As a result of the changes made for Issue #2, the configuration file no longer requires the PHONE type marker for the "address" field.
Instead, the "address" field can be declared as TEXT and the "-p" argument is used to define the relationship between the "address" field and the last cell header field. The example pic of the sample configuration file format has been edited accordingly.

To run it you now type something like:

sms-grep.pl -c config.txt -f mmssms.db -p 1 -s "5555551234" -o output.tsv
and 

sms-grep.pl -c config.txt -f mmssms.db -p 2 -s "5555551234" -o output.tsv

Testing
Limited testing with Android sms test data was again performed.
The script now seems to handle multiple byte payload integers correctly with the new configuration file format.
As always, users should validate this script for themselves before relying upon the returned results (this is my CYA = Cover Your Ass section). What worked for our test data may not work for yours ...

END UPDATE 2013-04-25 (Original post below also edited/revised)

Introduction

Android stores SMS records in the "sms" table of /data/data/com.android.providers.telephony/databases/mmssms.db. SQLite can also store backups of "sms" table data in the /data/data/com.android.providers.telephony/databases/mmssms.db-journal file (in case it needs to undo a transaction). Journal files are a potential forensic gold mine because they may contain previously deleted data which is no longer visible in the current database.
As far as I'm aware, there is currently no freely available way to easily view/print the sms contents of mmssms.db-journal files.
And while you can query the mmssms.db database directly via SQLite, this will not return any older (deleted) sms entries from database pages which have since been re-purposed.
Our sms-grep.pl script seems to work well with mmssms.db and mmssms.db-journal files and also with unallocated space (although file size limited/hardware dependent).
Additionally, our script will interpret date fields and print them in a human readable format so no more hours spent manually checking/converting timestamps!
Our script is also configurable - so you should be able to use it to look at multiple Android SMS SQLite schemas without having to modify the underlying code.

But before we dive into the script - it's probably a good idea to learn about how SQLite stores data ...

The SQLite Basics

The SQLite database file format is described in detail in Richard Drinkwater's blog posts here and here.
There's also some extra information at the official SQLite webpage.

OK, now for the lazy monkeys who couldn't be bothered reading those links ...
The basic summary is that all SQLite databases have a main header section, followed by a bunch of fixed size storage pages.
There are several different types of page but each page is the same size as declared in the header.
One type of page is the "table B-Tree" type which has a 0xD byte flag marker. This type of page is used to store field data from the whole database (ie data from all of the tables) in units called "cells". Consequently, this page type will be used to store "sms" table data and because the page format is common to both mmssms.db and mmssms.db-journal files - our carving job is potentially much simpler.
Pages can also be deleted/re-allocated for another type of page so we must also be vigilant about non-"table B-tree" pages having free space which contains old "table B-tree" cell data. Think of it like file slack except for a database.

A 0xD type (ie "table B-tree") page will look like:

Generic Layout of a 0xD page

We can see the 0xD byte is followed by:
- 2 bytes containing the 1st free cell offset (0 if full)
- 2 bytes containing the number of used data cells in page
- 2 bytes containing the 1st used cell offset
- 1 byte fragmentation marker

Then depending on the number of used data cells, there will be a series of 2 byte offsets which point to each used data cell (see the green section in the pic). The cell pointer list starts with the closest data cell first and ends with the "1st used offset" data cell. Each used data cell should correspond to a row entry in a table of the database (eg an "sms" row entry).
Following those cell pointers (green), will be any free/unallocated space (blue) followed by the actual data cells (purple). The blue area is where we might see older previously deleted "table B-tree" data.
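For the code-inclined, unpacking that 8 byte page header in Perl might look something like this (a sketch which assumes $page holds one page's raw bytes - it is not a quote from the script):

my ($flag, $first_free, $num_cells, $first_cell, $frag) = unpack("C n n n C", $page);
# "C" = 1 unsigned byte, "n" = 2 byte big endian. Expect $flag == 0xD for a table B-tree page.
my @cell_ptrs = unpack("x8 n$num_cells", $page);   # the (green) cell pointer list follows the 8 byte header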

Breaking it down further, the general form of a data cell (from the purple section) looks like:

Generic Layout of a Cell

We can see there's a:
- Cell Size (which is the size of the cell header section + cell data section)
- Rowid (ie Primary Key of the row)
- Cell Header section (comprised of a "Cell Header Size" field + a bunch of fields used to describe each type/size of field data)
- Cell Data section (comprised of a bunch of fields containing the actual data)

You might have noticed an unfamiliar term called a "varint".
Varints are a type of encoded data and are used to save space. They can be 1 to 9 bytes and require a bit of decoding.
Basically, you read the most significant byte (data is stored big endian) and if its most significant bit is set to 1, it means there's another byte to follow/include. Then there's a bunch of dropping most significant bits and concatenating the leftovers into a single binary value.
Richard Drinkwater's got a better explanation (with example) here.
Later for our script, we will need to write a function to read these varints but for now, just know that a varint can store anywhere from 1-9 bytes (usually 1-2 bytes though) and it requires some decoding to arrive at the "original value".
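As a taster, here's what a varint decoder might look like in Perl (a sketch - the function actually used is in the script source):

sub read_varint
{
    # Returns (decoded value, number of bytes consumed) starting at $offset in $data
    my ($data, $offset) = @_;
    my $value = 0;
    for my $i (0 .. 8)
    {
        my $byte = ord(substr($data, $offset + $i, 1));
        return ((($value << 8) | $byte), 9) if ($i == 8);   # 9th byte contributes all 8 bits
        $value = ($value << 7) | ($byte & 0x7F);
        return ($value, $i + 1) unless ($byte & 0x80);      # high bit clear = last byte
    }
}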

So for our Android SMS scenario, a typical used "sms" data cell might look like:

Android SMS Cell example

You'll notice that there's a "Cell Header" section highlighted in purple and a "Cell Data" section highlighted in pink.
Think of the Cell Header section as a template that tells us how many bytes to expect for each field in the Cell Data section. The Cell Data section does not use varints to store data.
From the sms-cell-example-revised pic, we can see that most of the Cell Header field types are 0x01 - which means those fields use one byte of data in the subsequent cell data section (pink). Also please note the potential for a multi-byte "thread_id" data field (circled in red).
The official SQLite documentation refers to these cell header field type values as "Serial Type Codes" and there's a comprehensive definition table about halfway down the page here.

For our sms example, we can see from the purple section that the sms "Read" and "Type" fields will use 1 byte each to store their data in the Cell Data (pink) section. Looking at the pink section confirms this - the "Read" field value is 0 (0 for unread, 1 for read) and the "Type" field is 1 (1 for received, 2 for sent).
As a rule, if the value of the cell header field type (purple section) is between 0x0 and 0x4, the corresponding data field (pink) will use that many bytes (eg 0x1 means 1 byte data field, 0x4 means 4 bytes).
If the value of a cell header field (purple section) is 0x5 (eg "Date" & "Date_sent" fields), it will take 6 bytes in the cell data (pink) section. The "Date" and "Date_sent" data fields are 6 byte Big Endian values which (for Android) contain the number of milliseconds since the Unix epoch (1 Jan 1970).
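In Perl, decoding one of those 6 byte date values might look something like this (a sketch - variable names are illustrative):

use POSIX qw(strftime);
my ($hi, $lo) = unpack("n N", $six_bytes);   # read 2 + 4 bytes, both big endian
my $ms = $hi * (2**32) + $lo;                # recombine into ms since 1 Jan 1970
my $iso = strftime("%Y-%m-%dT%H:%M:%S", gmtime(int($ms / 1000)));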
There's a special case for handling strings. Firstly, the cell header field type value must be odd and greater than or equal to 13. Then to calculate the number of bytes required in the data section we use this formula:

Number of bytes in string = (cell header field type value - 13)/2.

So in our sms-cell-example-revised pic, the corresponding string size for the "Address" field is (0x21 - 0xD) / 0x2 = (33 - 13) / 2 = 10 bytes. I haven't actually shown a value for the "Address" in the pink section so just use your imagination!
Similarly, we can see that the "Body" field will take (0x23 - 0xD) / 0x2 = (35 - 13) / 2 = 11 bytes.
Note: For long sms, the varint containing the "body" header field type has been observed to require 2 bytes.

You might also have noticed that not all of the cell header fields declared in the cell header section (purple) have a matching entry in the cell data section (pink). This is because if a cell header field is marked as NULL (ie 0x00), it does not get recorded in the cell data section (eg the purple "Rowid" header field's 0x00 value means there won't be a corresponding data field in the pink section).
So if we want to retrieve data, we can't go strictly off the schema - we have to pay attention to the cell header section so we can interpret the cell data fields correctly.
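Putting those rules into code, a serial type lookup might look like this (a sketch based on the official SQLite serial type table rather than the script's exact code):

sub serial_type_to_bytes
{
    my ($type) = @_;
    return 0 if ($type == 0);                     # NULL - no data field recorded
    return $type if ($type >= 1 and $type <= 4);  # 1 to 4 byte big endian integer
    return 6 if ($type == 5);                     # 6 byte integer (eg "Date"/"Date_sent")
    return 8 if ($type == 6 or $type == 7);       # 8 byte integer / 8 byte float
    return 0 if ($type == 8 or $type == 9);       # constants 0 and 1 - no data bytes
    return ($type - 13) / 2 if ($type >= 13 and $type % 2);   # odd values >= 13 are TEXT
    return ($type - 12) / 2 if ($type >= 12);     # even values >= 12 are BLOBs
    return -1;                                    # 10 and 11 are reserved
}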

So how do/did we know what cell data field was what?
It turns out that SQLite ensures that the order of cell header fields in the cell header (purple section) is the same order as the database schema field order. Consequently, the cell data section (pink) will also appear in schema order (notwithstanding any missing null fields).
We can get the schema of a database file using the sqlite3 command line exe like this:

sansforensics@SIFT-Workstation:~$ sqlite3 mmssms.db
SQLite version 3.7.11 2012-03-20 11:35:50
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .headers on
sqlite> pragma table_info(sms);
cid|name|type|notnull|dflt_value|pk
0|_id|INTEGER|0||1
1|thread_id|INTEGER|0||0
2|address|TEXT|0||0
3|person|INTEGER|0||0
4|date|INTEGER|0||0
5|protocol|INTEGER|0||0
6|read|INTEGER|0|0|0
7|status|INTEGER|0|-1|0
8|type|INTEGER|0||0
9|reply_path_present|INTEGER|0||0
10|subject|TEXT|0||0
11|body|TEXT|0||0
12|service_center|TEXT|0||0
13|locked|INTEGER|0|0|0
14|error_code|INTEGER|0|0|0
15|seen|INTEGER|0|0|0
16|date_sent|INTEGER|0|0|0
sqlite>


So we can see that the "sms" table consists of 17 fields with the first being the "_id" (ie rowid) primary key and the last being the "date_sent" field. In practice, the "_id" is typically unused/set to NULL as it is just duplicating the Cell's Rowid varint (from the white section). Some fields are declared as INTEGERS and others TEXT. Notice how the "date" and "date_sent" are declared as INTEGERS? These represent milliseconds since the Unix epoch.
At this stage, I'm not 100% certain on every field's meaning. We know the "address" field is used to store phone numbers and the "body" field stores the sms text string. From Mari's research we also know that "read" is 1 for a read sms, 0 otherwise and "type" indicates sent (2) or received sms (1). That should suffice for now.

So that's the basics of the data structures we will be looking at. In the next section, we'll share some thoughts on the script. Oh goody!

The Script

At first I thought we could find each 0xD page and iterate through the data cells that way but this would miss any old sms messages contained in pages which have since been re-purposed by SQLite. That method would also miss any corrupted/partial pages containing sms in unallocated space.
So to find the sms messages, we are going to have to come up with a way of detecting/printing individual sms data cells.

The strategy we ended up using was based on the "address" field (ie the phone number).

1. We read in our schema and print flags from a configuration file.

2. We create one big string from the nominated input file.
Perl has a handy function called "index" that lets you find out if a given string is contained in a larger string. We use this "index" function to find all phone number matches and their respective file offsets (see the sketch after this list).

3. For each match's file offset, we then work backwards and try to find the cell header size field (ie the start of cell header).
Looking at the sms-cell-example-revised pic, we can see that there are 17 (purple) varint fields plus the "thread_id" (pink) data field between the "Address" cell value (in pink section) and the cell header length/size field (in purple section). The number of varint fields should be constant for a given schema but it is possible for the number of bytes required for each varint to change (eg the "thread_id" data field is typically 1-2 bytes).

4. Now that we've determined the cell header size file offset, we can read in the header field type varints (ie find out how many bytes each field requires/uses in the cell data section) and also read in/store the actual data.

5. We then repeat steps 3 and 4 until we have processed all our search hits.

6. We can then sort the data in chronological order before printing to screen/file.
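To make steps 2 and 3 more concrete, here's a minimal sketch of the search loop (illustrative only - $filestring, $searchterm and @hits are assumed names rather than the actual sms-grep.pl variables):

my @hits;
my $hit = index($filestring, $searchterm, 0);
while ($hit != -1)
{
    # Step 3: work backwards from $hit over the "-p" payload bytes (eg "thread_id")
    # and the 17 cell header varints to locate the cell header size field
    push(@hits, $hit);
    $hit = index($filestring, $searchterm, $hit + 1);
}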

The main sanity check of this process is checking the cell header size value range. Remember, the cell header size value should tell us the number of bytes required for the entire cell header (including itself). So for our example schema above, this value should be:
- above the 18 byte minimum (ie number of schema fields plus the size of the cell header length = 17 + 1) and
- below a certain threshold (18+5 at this time).
Most "sms" cell header sizes should be 18 bytes (most of the fields are one byte flags) but for longer "body" fields or large "thread_id" field values, multi-byte varints have been observed which would obviously increase number of bytes required for that cell header. Allowing for an extra 5 bytes seemed like a good start for now.

For more information on how the script works (eg see how painful it is to read a multi-byte varint!) you can read the comments in the code. I dare you ;)

Making it Schema Configurable
As Mari has noted, not every Android phone will have the same schema. So instead of having a different script for each schema, we'll be utilising a configuration text file. Think of the configuration file as a kind of plugin for the script. Each phone schema will have its own configuration file. This file will tell the script:
- what the schema fields are and more importantly, their order,
- which fields are DATES or STRINGS or INTEGERS and
- whether we want to print this field's values


For possible future needs, we have also declared a "c4n6mtype=android" field. This is in case we need to read an iPhone schema sometime in the future (iPhones use seconds since UTC for their DATE fields).

Here's an example of a configuration file (also provided from my GoogleCode Download page as "sms-grep-sample-config.txt"):

Sample Android Configuration File

Notice that it's very similar to the schema we got earlier from sqlite3?
The most significant differences are:

- "date" field (which is now listed as a DATE type)
- "date_sent" field (which is now listed as a DATE type)
- the configuration file uses ":" as a field separator (sqlite3 uses "|")
- the print flags (1 prints the field value, 0 does not print)

The script will ignore any blank lines and lines starting with "#".

Update 2013-04-25: The "address" field is now declared as TEXT (previously declared as PHONE).
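To illustrate, a hypothetical excerpt of a configuration file might look something like the following (this is a mock-up only - see the "sms-grep-sample-config.txt" download for the definitive layout):

# Lines starting with "#" and blank lines are ignored
c4n6mtype=android
_id:INTEGER:0
thread_id:INTEGER:0
address:TEXT:1
date:DATE:1
read:INTEGER:1
type:INTEGER:1
body:TEXT:1
date_sent:DATE:1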

Running the Script
The first step would be to determine the schema. The easiest way to do this is to use the sqlite3 client with the mmssms.db as previously shown. Admittedly, this requires access to a database file so if you don't have a sample to work with, you're out of luck.
Next it's time to create the configuration file - making sure to declare the "address" field as TEXT and to mark any DATE fields. Also remember to specify which fields you wish to print.
Once that is done, we can run the script using something like:

sms-grep.pl -c config.txt -f mmssms.db -s "5555551234" -s "(555) 555-1234" -p 1  -o output.tsv

Note: Users can specify multiple phone numbers/formats to search for using -s arguments. At least one -s argument is required.
If no -o argument is specified, the results will be printed to the screen in Tab separated columns - which can get messy with lots of messages. Alternatively, an output Tab separated file (TSV) can be generated (eg using -o output.tsv).

Update 2013-04-25: To be thorough, the script will have to be run twice - once with "-p 1" and again with "-p 2" to cater for the possible 1 and 2 byte "thread_id" sizes.

Any extracted hits will be printed in chronological order based upon the first DATE type schema field declared in the configuration file (eg "date" field for our example configuration file). You will probably see multiple entries for the same SMS which was stored at different file offsets. The date sorting makes this situation easier to detect/filter.

Here's a fictional TSV output example based on the previously shown config file:

Fictional sample TSV output from sms-grep.pl

The arrows in the pic are used by Notepad++ to indicate TABs. We can see that only the print fields marked with a 1 in the configuration file (ie address, date, read, type, subject, body, seen, date_sent) are printed along with the file offset in hex.

Note: If you run the script on a partially complete cell (eg the cell header is truncated by the end of file so there's no corresponding cell data), the script will print out "TRUNCATED" for any strings and -999 for any integer fields. If you see these values, further manual parsing/inspection is recommended.

Testing
Limited testing of the script (initial version) has been performed with:
- 2 separate Android schemas
- Unallocated space (as retrieved by Cellebrite and exported into a new 600 MB file using X-Ways/WinHex)
- A Raw Cellebrite .bin file (1 GB size)

Note: The script failed to run with a 16 GB .bin file - we suspect this is due to a RAM deficiency in our test PC.

As I don't have an Android phone, I've relied pretty heavily on Mari for testing. We've tested it using ActiveState Perl v5.16 on 64 bit Windows 7 PCs. It should also work on *nix distributions with Perl. I have run it successfully on SIFT v2.14.

Additionally, an outputted TSV file has also been successfully imported into MS Excel for subsequent analysis.

Update 2013-04-25: Subsequent re-testing of the new version (2013-04-14) was limited to 2 different schema mmssms.db files in allocated space only. I reckon if it worked for these, it should still work for the other previous test cases.

Validation tips:
grep -oba TERM FILE
Can be used in Linux/SIFT to print the list of search hit file offsets (in decimal).
For example: grep -oba "5555551234" mmssms.db
Additionally, WinHex can be used to search for phone number strings and will also list the location of any hits in a clickable index table.  The analyst can then easily compare/check the output of sms-grep.pl.

What it will do
Find sms records to/from given phone numbers with valid formatted cell headers. This includes from both the SQLite database files (ie mmssms.db) and backup journal files (ie mmssms.db-journal). It should also find any existing sms records (with valid headers) that appear in pages which have been since re-allocated by SQLite for new data. Finally, it should also be able to find SMS from unallocated space (assuming the size isn't larger than your hardware can handle).

What it doesn't do very well
If the cell header is corrupted/missing, the script will not detect sms data.

The script does some range checking on the cell header size and prints a warning message if required. However, it is possible to get a false positive (eg phone number string is found and there's a valid cell header size value at the expected cell header size field). This error should be obvious from the output (eg "body" field has nonsensical values). The analyst is encouraged to view any such errors in a Hex viewer to confirm the misinterpreted data.

Unallocated space has proved troublesome due to size limitations. The code reads the input file into one big string for searching purposes, so running out of memory is a possibility when running on large input data such as unallocated. However, if you break up the unallocated space into smaller chunks (eg 1 GB), the script should work OK. I have also used WinHex to copy SMS data out from unallocated and paste it into a separate smaller file. This smaller file was then parsed correctly by our script.
The big string approach seemed like the quickest/easiest way at the time. I was targeting the actual database files/database journals rather than unallocated. So consider unallocated a freebie ;)

We have seen some SQLite sms records from an iPhone 4S which does NOT include the phone numbers. There may be another field we can use instead of phone numbers (perhaps we can use a phone book id?). This requires further investigation/testing.

Final words

As always, you should validate any results you get from your tools (including this one!).

This script was originally created for the purposes of saving time (ie reduce the amount of time spent manually parsing sms entries). However, along the way we also learnt more about how SQLite stores data and how we can actually retrieve data that even SQLite doesn't know it still has (eg re-purposed pages).

The past few weeks have flown by. I originally thought we were creating a one-off script for Android but due to the number of different schemas available, we ended up with something more configurable. This flexibility should also make it easier to adapt this script for future SQLite carving use (provided we know the schema). It doesn't have to be limited to phones!

However, this scripting method relies on providing a search term and knowing what schema field that term will occur in. There's no "magic number" that marks each data cell, so if you don't know/cannot provide that keyword and schema, you are out of luck.

I would be interested to hear any comments/suggestions. It can probably be improved upon but at some point you have to stop tinkering so others can use it and hopefully suggest improvements. If you do have problems with it please let me know. Just a heads up - for serious issues, I probably won't be able to help you unless you supply some test data and the schema. 
To any SANS Instructors reading, any chance we can get a shoutout for this in SANS FOR 563? Monkey needs to build some street cred ... ;)

Thursday 3 January 2013

Dude, Where's My Banana? Retrieving data from an iPhone voicemail database


This is a complementary post to Mari DeGrazia's post here about what to do when your tools don't quite cut the mustard. In today's post, I'll show how we can write a Perl script to retrieve the contents of an iPhone's voicemail database and then display those contents in a nice HTML table.

The first thing I *should* have done was Google it and see if anyone had written a similar script ... D'Oh!
But due to my keen-ness, I dived right in and using iPhone and IOS Forensics by Hoog and Strzempka (2011) plus some previous code I'd written, it took me a couple of days (at a leisurely end of year pace) to write this script.

Soon after I wrote this script, I learned that John Lehr had already written a bunch of similar iPhone scripts in Python in 2011. So while it looks like this monkey was a little late to the party, I still had fun learning and creating something.
You can view John's iPhone Voicemail script here.

My Python skills are pretty limited but it looks like my script is very similar to John's (except for the HTML generation part). So I guess that's comforting - ie I didn't miss out on some obscure Apple incantation to Lord Jobs (I'm joking OK? Please don't sue me LOL).

Writing the script

First we use the DBI Perl package to read "voicemail.db". Next, we use the HTML::QuickTable package to print out the HTML table.
We've used both of these packages before (see exif2map.pl and squirrelgripper.pl posts), so it should be pretty straight-forward. Not being able to think of a clever and punny name, I'm just calling this script "vmail-db-2-html.pl". Catchy huh?

You can download the script from here. I'll spare you the agony of a line-by-line commentary and just delve into the most interesting parts.

So this is what the voicemail.db schema looks like (via the sqlite command line interface):
sqlite> .schema
CREATE TABLE _SqliteDatabaseProperties (key TEXT, value TEXT, UNIQUE(key));
CREATE TABLE voicemail (ROWID INTEGER PRIMARY KEY AUTOINCREMENT, remote_uid INTEGER, date INTEGER, token TEXT, sender TEXT, callback_num TEXT, duration INTEGER, expiration INTEGER, trashed_date INTEGER, flags INTEGER);

CREATE INDEX date_index on voicemail(date);
CREATE INDEX remote_uid_index on voicemail(remote_uid);


Using iPhone and IOS Forensics by Hoog and Strzempka (2011) pp. 193, 194 - the important bits (for us anyway) are located in the "voicemail" table. These are the:
ROWID =  Unique index number for each voicemail entry. Each entry's voicemail file uses the format "ROWID.amr" for the voicemail's filename. ROWID increments by 1 so if voicemails are deleted there will be discrepancies between the ROWID numbers and the current number of voicemail entries.
date = Date and time relative to the Unix epoch (ie seconds since 1 Jan 1970).
sender = Phone number of person who left the voicemail. Can be "null" presumably if the number is withheld.
duration = Duration of voicemail in seconds.
trashed_date = Time when the user placed the voicemail in the "Deleted" folder or "0" if not deleted. This field is a Mac "CF Absolute Time" = number of seconds since 1 JAN 2001 (Thanks to Mari for pointing this out!). Consequently, we have to add 978307200 to our "trashed_date" before we can use it with any Unix epoch date functions (eg "gmtime"). Note: 978307200 is the number of seconds between 1 JAN 1970 and 1 JAN 2001 (see the sketch below).
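In code, that adjustment only takes a couple of lines (a sketch using POSIX's strftime - the script's actual "printCFTime" function may differ):

use POSIX qw(strftime);
my $unixtime = $trashed_date + 978307200;   # CF Absolute Time -> Unix epoch seconds
my $readable = strftime("%Y-%m-%d %H:%M:%S", gmtime($unixtime));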

Once we know the schema we can formulate our SQLite query (see line 74's "$db->prepare" argument):
"SELECT rowid as Rowid, sender as Sender, datetime(date, 'unixepoch') AS Date, duration as 'Duration (secs)', rowid as Filename, trashed_date as 'Deleted Date' from voicemail ORDER BY rowid ASC"

We're using the SQLite "as" functionality to create pretty alias names for the table headings. We're also using the SQLite "datetime" function to convert the Unix epoch "date" field into a YYYY-MM-DD HH:MM:SS string. The "trashed_date" will be handled later via the script's "printCFTime" function. For the moment, we will just retrieve the raw Mac "CF Absolute time" value.
The query results will be returned in order of ascending "rowid" and subsequently processed via the "PrintResults" function. 
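The DBI side of this is fairly standard boilerplate - something like the following sketch (assuming the DBD::SQLite driver is installed; this is not the verbatim script source):

use DBI;
my $db = DBI->connect("dbi:SQLite:dbname=voicemail.db", "", "", { RaiseError => 1 });
my $sth = $db->prepare("SELECT rowid as Rowid, sender as Sender, datetime(date, 'unixepoch') AS Date, duration as 'Duration (secs)', rowid as Filename, trashed_date as 'Deleted Date' from voicemail ORDER BY rowid ASC");
$sth->execute();
while (my @row = $sth->fetchrow_array)
{
    # @row = (Rowid, Sender, Date, Duration, Filename rowid, raw trashed_date)
    # ... store each row in the results hash here ...
}
$sth->finish();
$db->disconnect();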

Once we have the results from the database, we then store them in a variable (imaginatively) called "results_hash".
The "results_hash" variable is set from within the "PrintResults" function and involves some mucking around to get the required table fields (eg human readable trash date, HTML link to .amr files). Essentially, each entry of the "results_hash" has a key (the rowid) and an associated array of values (eg From, Date, Duration, Filename, Deleted Date).
Once we've got the "results_hash" all set up, we can then call HTML::QuickTable's "render" function to do the actual HTML table generation and then add in some of our own text for the number of rows returned.
The resultant HTML file will be called "vmail-db-2-html-output-X.html" where X represents a timestamp of the number of seconds since 1 Jan 1970.
Note: Due to how the HTML::QuickTable renders hashes, the HTML table "rowid" entries are printed in textual rowid order (eg 1, 10, 2, 3).
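And the render call itself is roughly as follows (a sketch - $html_fh is an assumed output filehandle and the HTML::QuickTable options have been trimmed for brevity):

use HTML::QuickTable;
my $qt = HTML::QuickTable->new(border => 1, labels => 1);
print $html_fh $qt->render(\%results_hash);   # hash keys sort as strings, hence 1, 10, 2, 3 ...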

Running the script

I tested the script on SIFT v2.14 with Perl v5.10 and also on Win 7 Pro-64 with ActiveState Perl v5.16.1.
Here are the Perl package dependencies:
DBI
HTML::QuickTable
Getopt::Long
File::Spec


If you run the script and it doesn't work, it's probably complaining that it can't find one of those packages.
To install a package X on SIFT you can use:
"sudo cpan X"
eg1 "sudo cpan HTML::QuickTable"
eg2 "sudo cpan DBI"
The 2 examples shown above will probably be the most likely culprits.
Also, after downloading the script on SIFT, you should ensure that it is executable by typing something like:
"sudo chmod a+x vmail-db-2-html.pl"

If you're using ActiveState Perl, just use the Perl Package Manager to install the relevant packages.

And here's the help text - if I've written it right, it should be all you need (Ha!)

sansforensics@SIFT-Workstation:~$ ./vmail-db-2-html.pl -h
vmail-db-2-html.pl v2012.12.28

Perl script to conjure up an HTML table from the contents of an iPhone's voicemail.db SQLite database.

Usage: vmail-db-2-html.pl [-h|help] [-db database] [-f folder]
-h|help ........ Help (print this information). Does not run anything else.
-db database ... SQLite database to extract voicemail data from.
-f folder ...... Optional foldername containing the .amr files for linking. If not specified,
the script assumes the .amr files are in the current directory.

Example: vmail-db-2-html.pl -f heavy-breather/vmails -db voicemail.db

The script will extract the voicemail data from voicemail.db and then
write HTML links to the relevant .amr using the nominated directory (eg "heavy-breather/vmails/1.amr")
The .amr files must be copied to the nominated directory before the link(s) will work.


Script Output

The script was tested using data from an iPhone 4S running iOS 6. Unfortunately, I cannot show you any actual case output and I also do not have any iPhone data of my own - so here's some fictional output just so you can see how purdy everything is ...

Example of command line output:
sansforensics@SIFT-Workstation:~$ ./vmail-db-2-html.pl -f heavy-breather/vmails -db voicemail.db

Now Retrieving Voicemail data ...

Rowid | Sender | Date | Duration (secs) | Filename | Deleted Date
1 | +12005551234 | 2013-01-01 00:00:01 | 25 | 1.amr | 2013-01-01 12:00:01
2 | +12005552468 | 2013-01-01 01:00:01 | 10 | 2.amr | 0
3 | +12005551357 | 2013-01-01 02:00:01 | 28 | 3.amr | 0
4 | +12005551123 | 2013-01-01 03:00:01 | 30 | 4.amr | 0
5 | +12005554321 | 2013-01-01 04:00:01 | 19 | 5.amr | 0
6 | +12005558642 | 2013-01-01 05:00:01 | 17 | 6.amr | 0
7 | +12005557531 | 2013-01-01 06:00:01 | 26 | 7.amr | 0
8 | +12005551234 | 2013-01-01 07:00:01 | 51 | 8.amr | 0
9 |  | 2013-01-01 08:00:01 | 41 | 9.amr | 2013-01-01 12:01:01
10 | +12005551234 | 2013-01-01 10:00:01 | 15 | 10.amr | 0

10 Rows returned

Please refer to "vmail-db-2-html-output-1357011655.html" for a clickable link output table

sansforensics@SIFT-Workstation:~$


Note1: Rows are printed in numerical rowid order for the command line output.
Note2: Null value for rowid 9 is left as a blank.

Here's the corresponding HTML generated file output example:

Note1: Rows are printed in textual rowid order for the HTML table (due to how the HTML::QuickTable renders)
Note2: Null values (eg for rowid 9) are displayed as a "-".
Note3: The HTML link to Filename will assume the user has copied the .amr files into the user specified folder (eg heavy-breather/vmails/1.amr). If no folder argument is given, the script will assume the .amr files are in the current local directory and link accordingly (eg 1.amr).

Final Thoughts

Mari's "Swiss Army Knife A $$$$$" tool did not process iPhone voicemail deleted dates or indicate if the voicemails were deleted. By writing this Perl script we were able to obtain this extra information that otherwise may have been missed.

By writing this script I also feel like I:
- Helped a friend and by sharing the solution, potentially helped other DFIRers.
- Improved my knowledge of iPhone voicemail. I had skim read iPhone and IOS Forensics by Hoog and Strzempka about 6 months ago but writing this script provided some much needed reinforcement. Additionally, I also learned how to handle yet another time format - the Mac "CF Absolute Time".
- Exercised my Perl coding skills. Like any language, skills atrophy if you don't use them regularly. This exercise also showed me the benefit of building up your own code library - I was able to "cut and paste" parts of my previous scripts into this new script thus saving time.

I'm not really bothered that I re-invented the wheel for this script. While John Lehr's script already provides the trashed date information - if I hadn't tried writing this, I would have missed out on a great learning opportunity.
I think in my case, "learning by doing" sticks in my brain better than learning exclusively via reading someone else's work. "Having a go" at something doesn't mean it has to be original or even successful so long as you are able to learn something from it. Sharing what you've learnt/helping others is just an added bonus.

Finally, one helpful tool for converting different date formats is the free "DCode" Windows exe from www.digital-detective.co.uk.
I used this tool to verify my script's arithmetic in converting "CF Absolute time" to a human readable time but it will also do a bunch of other conversions.

So that's about it for my first post of 2013. Any comments/suggestions are welcome.