Thursday, May 29, 2014

Rapid Response & Assessment - Data Collection

This post is a continuation of my RR&A (Rapid Response & Assessment) articles and the process I use. In my previous post here, I covered the initial step of the overall RR&A program: running EnCase Sweep Enterprise, with some insights into the data points I review. That procedure can be accessed here. This post, along with its accompanying procedure, details the continued response after Sweep Enterprise has run by specifying the data points to collect. The procedure itself is just a rudimentary step-through of the data to collect. Nonetheless, it provides instruction for the non-technical people on my team to follow while I tend to more pressing matters. I have a couple of non-technical people I can turn to for data collection, so I will point them to this procedure.

When you don’t have a fully automated enterprise tool that can do all the heavy lifting for you, you must improvise to address the problem. When you have lemons, well, you know what you do.

I met with the folks from Mandiant not so long ago to discuss MIR and its offerings. I am incredibly impressed with MIR and really like the built-in endpoint containment feature, something I am sorely lacking where I work. Until I can convince my employer why I believe we need a tool like MIR, the best I can do right now is my “roll your sleeves up and get yourself dirty” approach. It works; however, it’s slow, laborious, and just takes too damn long, especially when I need to contain a host. I am referring to my RR&A procedure. The upside to this approach is that you become acutely aware of what data you really need, and that understanding is a good thing. Too many folks spend time clicking about in a GUI not achieving much, then finally throw up their hands: “Nope, can’t see it.” Urgh! Anyway, onward and upward I try every day.

I do need to improve the speed and efficiency of this RR&A approach. It could start with an EnScript for the data collection piece, and then I can look at ways to automate the analysis, or parts of it. I am not a scripter, so I will be reaching out to my in-house SMEs for input if I can pin them down for a moment.
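To make the idea concrete, something as simple as the following Python sketch could batch-run parsers over a collection folder once the data lands. This is only a rough illustration, not a tested configuration: the tool paths, arguments, and folder layout are all placeholders.

```python
# Rough illustration: run each parser over its matching artifact in a
# collection folder. Paths and command lines are placeholders.
import subprocess
from pathlib import Path

COLLECTION = Path(r"C:\Cases\HOST01\collected")

# Hypothetical artifact -> parser command-line mapping.
JOBS = [
    ("MFT", ["python", "analyzeMFT.py",
             "-f", str(COLLECTION / "$MFT"),
             "-o", str(COLLECTION / "mft_timeline.csv")]),
    ("Registry", ["perl", "rip.pl",
                  "-r", str(COLLECTION / "SYSTEM"), "-f", "system"]),
]

for name, cmd in JOBS:
    print(f"[*] Parsing {name} ...")
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        print(f"[!] {name} parser failed: {result.stderr.strip()}")
```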

The next few posts from here on out will continue with the RR&A topic by looking at the analysis approach for each data point collected.

The procedure can be accessed here on Google Drive.

Regards,

Mr.Orinoco

Thursday, April 24, 2014

Problems and Solutions

There are lots of problems in this line of work; however, let me get specific. I have a problem with my career path and the poor direction it seems to have taken. You see, I have been working as a DF analyst for quite a while (8 years), performing mostly corporate investigations involving policy violations, legal matters, ethics cases, code of conduct matters, employee fraud, commodity malware, corporate security matters, and HR matters. These are important concerns for protecting the company, no doubt.

After years of performing these types of investigations, I feel the work has become stale and no longer challenging. With that said, my concern is that I am becoming stale, and that bothers me. So what am I doing about this dilemma? Well, in one of my previous posts here, I listed several things I do to remain as current as I can when the work is stale and training budgets are a shoestring. My last two training sessions were SANS FOR508 at SANSFIRE 2012 and the EnCase 7 transition course. In my opinion, that is not sufficient to keep an analyst current. Eighty hours a year of classroom training should be the bare minimum, with trips to conferences for good measure.

The title of this post is “Problems and Solutions”. I am going to start posting on problems I encounter and then list my attempts at finding solutions to them. The latest problem I find myself encountering, aside from the above concerns, is responding to matters that involve UNIX servers. This may not seem like a big deal to a lot of DF analysts; however, for me, seeing as I spend 99.99% of my time in Windows user land, it is a big deal. That’s because I have no previous exposure or experience with UNIX, aside from SIFT and Mac OS X, if you will.

So what am I doing about it? That’s the solution part.

Responding to matters where you have no knowledge or experience in that particular area (UNIX) is not a pleasant feeling. I was very nervous and at a complete loss when asked, or pushed, into responding to issues involving UNIX. That was a few weeks ago. I do not like not knowing and feeling inadequate. So I decided to tackle the issue the best way I could and as quickly as possible. I refused to continue on like this, as I knew it would just be a matter of time before I was asked to look at another UNIX server.

Here is what I did, or continue to do, to address the problem.

1. I bought a NAS (QNAP) with embedded UNIX and started SSHing into the device to learn about the OS and its layout, in addition to learning about the applications/services it has installed.

2. Placed the device on the Internet and watched the logs fill up. Within 60 minutes it was under attack from China and Eastern Europe. Amazingly fun.

3. Purchased “Sams Teach Yourself UNIX in 24 Hours”. Banged through five hours in one day. Love the “w” command. Feeling like a pro already. Continuing through the lessons.

4. Got access to a UNIX server at work to understand the environment.

5. Engaged a UNIX engineer to hold meetings to discuss logs. It seems it’s all about logs in UNIX: access logs, HTTP logs, terminal history, sulog, cron logs, and process accounting logs. (A small example of digging into these follows below.)

These are the most active steps I am taking to address this problem. Now I am actually really enjoying UNIX. Why I did not start using and learning UNIX years ago is beyond me.
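For a taste of what those logs can show, here is a small Python sketch that counts failed SSH logins per source IP in a syslog-style auth log. The log path and message format are assumptions; they vary by distribution, and the QNAP may log elsewhere.

```python
# Count failed SSH logins per source IP from an OpenSSH auth log.
import re
from collections import Counter

failed = Counter()
# Matches OpenSSH's "Failed password for [invalid user] X from <ip>" lines.
pattern = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

with open("/var/log/auth.log") as log:   # path varies by distribution
    for line in log:
        match = pattern.search(line)
        if match:
            failed[match.group(1)] += 1

for ip, count in failed.most_common(10):
    print(f"{count:6d}  {ip}")
```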

My point to this post is nicely summed up in the quote below.



Regards,

Mr.Orinoco


Friday, March 7, 2014

Rapid Response & Assessment - EnCase Sweep Enterprise

In Part 2 (here) of my posts discussing my Rapid Response & Assessment (RR&A) approach to endpoints, I listed EnCase Sweep Enterprise as the first tool to use when assessing said endpoint. Keeping with the theme of this blog, I have created the procedure to accompany that initial step of my RR&A. I will post the other procedures associated with my RR&A program as quickly as I can.

The procedure is posted here on Google Drive. I like to use Google Drive as I can keep my procedures organized on the backend and easily access the actual document files when needed.

Regards,

Mr. Orinoco



Monday, February 17, 2014

The desire for more excitement and less frustration.

I work for an international organization as a Digital Forensic Examiner, and I am told I perform quite well for the type of work I am exposed to. I am also looked upon as the IR fella; however, I have had little to no exposure or experience in that specific arena, that being, responding to major incidents at the enterprise level. This is both a blessing and a curse: a blessing because there have been no major incidents, a curse because it hinders my experience and ultimately my growth. All my training and experience is based on analyzing disks in the Windows desktop/laptop world, addressing matters for Legal, Human Resources, Corporate Security, lines of business, policy violations, and some commodity malware investigations. These are all important; however, I am starting to feel the work is repetitive and is hindering my growth into more advanced areas.

I have been doing what I do for a few years now and have gained much knowledge and experience in the area of disk forensics. I have built out a forensic infrastructure with a presence in major regions of the globe. I have put together our operating procedures for conducting investigations using either commercial tools or open source. Lastly, I have standardized our reporting templates for investigations.

I enjoy performing RAM dump analysis with Redline and Volatility, but wish I could do more. I get excited at the thought of performing my next timeline with log2timeline, and I really love digging into UsnJrnl files to witness the birth of hostile binaries and follow their life cycle. This stuff excites me. However, I have an issue, as I suspect a lot of folks in the general workforce do. I love what I do, but not necessarily with whom and under the circumstances I am doing it. That is where the frustration persists. It becomes frustrating when you learn new skills but don’t get to use them on a continual basis.

On a day not so long ago, I had an “Aha!” moment that I knew would arrive eventually; it was just a matter of time. The thought went like this: “If I am to truly grow in this field, I need to work around people with similar desires and wants.” I feel I currently do not have that where I am.

So what have I done to alleviate some of this frustration? While I attempt to figure out my options I am doing multiple things to keep pushing forward in this field.  For example, I created this blog to gain exposure and create an awareness of myself, my work, knowledge, thought process and interests. There are many other things a person in this line of work must do daily in an attempt to stay current with tools, procedures, artifacts and trends. My approach is listed below.

Education & Training

Education and continued training in any field is non-stop, especially one as technical as DF. My approach to continual education is shown below. It’s easy to go off in many directions in a field this broad; the key for me is to stay focused on core skills and needs, what you’re good at, and then expand into areas of specialization as needs and/or desires require. For example, I am not a malware reverse engineer; that is a separate discipline altogether. I will spend some time dedicated to static analysis, which I think is good; however, that’s as far as it goes. There is so much more going on with the core of what I do that I need to stay focused on it. It’s very easy to get distracted in a field this broad. I don’t want to be a jack-of-all-trades; this field is so broad you can’t know everything, but at least have some kind of specialty if you can. Say, OS X, for example.

Training

Official classroom training is a must. My opinion is 80 hours a year should be the minimum. I’m an advocate for sitting in a classroom with industry peers and an instructor. Virtual training is OK, just not for me. I like real-time discussions and an instructor you can pull to the side.

Budgets are tight though when it comes to training. I’m lucky to get training to just satisfy my certificate maintenance. So what to do about it? Read on.

Books

Continue to read books, as many as time will permit. Below is an example of some of the books I have read.

· Windows Forensic Analysis series, Harlan Carvey
· Windows Internals, 6th Edition, Part 1, Mark Russinovich
· Practical Malware Analysis, Michael Sikorski and Andrew Honig

This is just a sample; the list goes on, but you get the idea.

Blogs

There are many blogs out there for this line of work, and it’s easy to get lost in the number available. The key is to pick a few core blogs to follow, and then add a couple of specialty ones. Below are examples of core blogs I follow.


Webinars

What can I say except Mandiant, Mandiant, and more Mandiant. These folks have amazing webinars. You must subscribe to their Fresh Prints of Malware webinars; they are incredibly useful. They have some great tools also. Because I am an EnCase user, I also sit through Guidance Software’s webinars. Again, very useful information on using their tools.

White Papers

Take the time to read the white papers released by various vendors and institutions. The authors of these papers put a lot of time and research into producing them.

Tools

There are two camps here: commercial and open source. I use a combination of both. The number of tools available today to parse evidence or log files is vast. Try them out; however, you must settle on what works for you and have a core set. Commercial tools cannot do everything; therefore, I have a set of open source tools for processing certain pieces of data. For example, I use EnCase Enterprise for my acquisitions, case processing, bookmarking evidence, etc. However, I use RegRipper for processing registry hives because it is so good and efficient. It’s easy to get overwhelmed with the number of tools, especially the open source ones. Don’t end up with a Downloads folder full of tools whose purpose you can’t recall. Download and test, verify against your test data, but if you can’t use the tool for whatever reason, put it to the side and move on.
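As an illustration of that commercial/open source split, here is a hedged sketch of wrapping RegRipper over a set of collected hives. The -r (hive) and -f (profile) switches follow RegRipper’s usage; the hive paths and output naming are examples only.

```python
# Run RegRipper (rip.pl) across several collected hives, one report each.
import subprocess

HIVES = [
    (r"C:\Cases\HOST01\NTUSER.DAT", "ntuser"),
    (r"C:\Cases\HOST01\SYSTEM", "system"),
    (r"C:\Cases\HOST01\SOFTWARE", "software"),
]

for hive_path, profile in HIVES:
    report = hive_path + ".ripped.txt"
    with open(report, "w") as out:
        # rip.pl: -r = hive to parse, -f = plugin profile to run against it
        subprocess.run(["perl", "rip.pl", "-r", hive_path, "-f", profile],
                       stdout=out, check=False)
    print(f"[*] {profile} report written to {report}")
```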

Testing

I have test data that is my known good data; I know the image well. Perform as much testing against that data as you can, continuously. When time permits, I ask myself, “What piece of data or tool do I want to test?” I may have read a blog entry somewhere and want to test something mentioned in it. I keep a separate bookmark folder in my browser called “To Read” for webpages on my must-read list or things I want to test.

Procedures

I create the procedures for my group. I identify a piece of data that I am interested in and create the official procedure for my group to follow when parsing that data. This requires research into the data point, identifying the right tool, testing with my test data, reviewing and confirming the results, and finally creating a procedure for folks to follow.

Knowledge Sharing

I share what I know. This builds relationships and fosters a positive work environment. We are all in this together; the work is challenging and often frustrating when you don’t know a certain thing. It’s only through sharing that we can expect any of us to become better analysts.

That’s it for today.

P.S. My next book to read is on pre-order: The Art of Memory Forensics.

Even though I don’t get to do much in the way of memory analysis right now, my standard operating procedures state to perform a RAM acquisition regardless of the investigation type. Never a bad thing, and you might just need it during what starts out as a run-of-the-mill policy violation case.


Thursday, January 30, 2014

Rapid Response & Assessment of a Suspect Device – Part 2

In Part 1 of this post I discussed a number of items that can influence the success of a Rapid Response & Assessment (RR&A) program build-out. These are items that I believe must be considered, or that I have experienced or been exposed to at some level. In Part 1 I termed these 13 items the “Recipe for Success”. Any one of these 13 elements can affect the success you have with your RR&A program.

The bottom line to RR&A is speed and accuracy. Personally, I still need to automate parts of my RR&A program to collect and analyze the data more efficiently than I do right now. I’m never satisfied and want to improve my procedures. I’m working on this and will be reaching out to my “programmer” colleagues for help.

Just for clarity, as I stated in Part 1, there is more than one way to skin a cat, or triage a suspect host. I welcome feedback so I may improve my own program. What I have documented here is just one way of triaging a suspect host, based upon what is available to me at my organization, using my Recipe for Success as a guide. As I improve my procedures (automation) for processing data and identify additional data points to process, along with tools that give the best bang for the buck, I will continue to fine-tune my program. It’s an ongoing refinement that never ceases. The available data artifacts should drive your procedures, not the tools you have. For example, although I use EnCase Enterprise for data collection, this tool plays one part, albeit an important one, of the overall process. I utilize tools such as RegRipper (Carvey) and auto_rip (Harrell) to process registry data because they are so efficient and get to the heart of the matter in a targeted fashion with little effort. I know the data I want to analyze, I know where it is, and RegRipper and auto_rip get me to that data very quickly.

In this Part 2, I am going to discuss the procedures I use to analyze the data once it has been collected. Shown below is the list of data sets I collect for analysis during RR&A. We will pick this up at Step 2, EnCase Sweep Enterprise.

Step 1 – Attach to target machine with EnCase Enterprise
Step 2 – Perform a Sweep Enterprise (a very useful EnCase feature)
Step 3 – Collect RAM dump
Step 4 – Collect Prefetch files
Step 5 – Collect the $MFT
Step 6 – Collect the $UsnJrnl file
Step 7 – Collect NTUSER.DAT and UsrClass.dat for the logged-on user
Step 8 – Collect SYSTEM, SOFTWARE, SAM, and SECURITY hives
Step 9 – Collect Application, Security, and System event logs

Procedures

Step 2 – Perform a Sweep Enterprise

Collection/Analysis Tool: EnCase Sweep Enterprise

Sweep Enterprise is a very useful tool built into EnCase Enterprise, especially the newer version in v7.09. The tool can provide a very detailed look into the current activity on a given system where the EnCase agent is installed. You simply enter the IP address of the target device into the Sweep console and execute. You get back an incredible amount of data to assist with the triage of your target device: for example, network connections, open ports, running processes, and open files, among many other available items.

The items I review to get a quick overview of system activity are Network Connections/Ports, Running Processes, Open Files and Logged on Users. I realize this is specific to EnCase and not all operations have the funding for this product, but as I stated earlier, this is how we are performing RR&A based upon our Recipe for Success influences. 

Network Connections

This lets me see current network connections from the target machine, that is, connections to both internal and external endpoints. This feature of Sweep Enterprise will also show me the process that has a specific connection established. As you can imagine, this one piece of data is very useful in identifying and evaluating established connections to endpoints.
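Sweep Enterprise does this process-to-connection pairing remotely over the agent. As a rough local analogue, and a handy way to learn what normal looks like on a test box, the third-party psutil package (pip install psutil) can produce the same pairing on a live host. This is an illustration, not part of the EnCase workflow.

```python
# Pair established network connections with their owning processes.
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        proc_name = psutil.Process(conn.pid).name() if conn.pid else "?"
    except psutil.NoSuchProcess:
        proc_name = "?"
    # laddr/raddr are (ip, port) named tuples in recent psutil releases
    print(f"{proc_name:<20} {conn.laddr.ip}:{conn.laddr.port} -> "
          f"{conn.raddr.ip}:{conn.raddr.port}")
```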

Running Processes

The next item up is identifying and assessing processes running on the target device. If you are intimate with Windows processes, or have an approved MD5 baseline to reference, you can discount normal processes as you pick through each one. This should leave you with a short list of processes that require further evaluation as to their legitimacy. That evaluation is especially warranted if a given process is talking to a notable IP address on the public Internet; this is where you can tie Network Connections to Processes to spot suspicious activity. As an example, one obvious concern would be the discovery of Explorer.exe talking to an IP address located in Eastern Europe. That would be highly notable and require additional follow-up.
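A minimal sketch of the baseline idea, again using psutil locally for illustration and assuming the baseline is a plain text file with one approved MD5 per line:

```python
# Hash each running process's on-disk image and flag anything that is
# absent from an approved MD5 baseline.
import hashlib
import psutil

with open("approved_md5_baseline.txt") as f:   # assumed format: one hash per line
    baseline = {line.strip().lower() for line in f if line.strip()}

for proc in psutil.process_iter(["pid", "name", "exe"]):
    exe = proc.info["exe"]
    if not exe:               # access denied or no backing file
        continue
    try:
        with open(exe, "rb") as binary:
            digest = hashlib.md5(binary.read()).hexdigest()
    except OSError:
        continue
    if digest not in baseline:
        print(f"[?] Unverified: {proc.info['name']} (pid {proc.info['pid']}) {exe}")
```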

Open Files

Next up is open files. This feature allows the examiner to see which files are currently open by the individual processes. Very useful to see who has what open.


Logged on Users

Another useful feature is “Logged on Users”. Reviewing this data set upfront tells me who is currently logged on to the device, thus indicating which user-specific registry hives (NTUSER.DAT and UsrClass.dat) to pull.

Step 3 – RAM Analysis

Analysis Tools: Redline, Volatility

EnCase Enterprise gives you the ability to collect the RAM contents from your suspect host. Once the data is collected you can then parse the RAM dump with tools such as Mandiant Redline and Volatility. 

I like to use Redline first to see if I have any notable processes that have been “redlined”. If I do, then off to the races I go with Volatility to analyze the data. Volatility is far more powerful for analyzing RAM dumps. Redline is great, but I choose to use it to triage my RAM dumps and then get down to the nitty-gritty with Volatility. I’m still learning Volatility and use it as often as I can. It has proven to be very useful.
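As a sketch of that triage pass, the Volatility 2.x command line (vol.py -f <image> --profile=<profile> <plugin>) can be scripted to dump a first-pass plugin set. The image path and profile below are examples; match the profile to the target OS.

```python
# Run a first-pass Volatility plugin set against a collected RAM image.
import subprocess

IMAGE = r"C:\Cases\HOST01\ram.bin"
PROFILE = "Win7SP1x86"   # example; must match the target OS

for plugin in ["pslist", "psscan", "netscan", "malfind"]:
    report = f"{IMAGE}.{plugin}.txt"
    with open(report, "w") as out:
        subprocess.run(["python", "vol.py", "-f", IMAGE,
                        f"--profile={PROFILE}", plugin],
                       stdout=out, check=False)
    print(f"[*] {plugin} output written to {report}")
```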

Step 4 – Prefetch Analysis

Tool: WinPrefetchView

One of my favorite artifact types is prefetch files. These files, when available, contain a wealth of information when analyzed with a parser. They are the low-hanging fruit for seeing which binary files have executed on a Windows system. If you know Windows systems and your environment well enough, any anomalies should not take long to identify. Often, when you have identified a notable process via the EnCase Sweep Enterprise results, you can turn to prefetch files for artifacts of said process, thus increasing confidence in the data you are seeing.
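Even before a parser runs, the .pf file names themselves tell you something. Here is a quick pure-Python first glance at a copied Prefetch folder; a dedicated parser such as WinPrefetchView adds run counts and loaded modules, while this only lists executable names and rough last-run times from file mtimes. The folder path is an example.

```python
# List executable names and approximate last-run times from .pf files.
import datetime
from pathlib import Path

for pf in sorted(Path(r"C:\Cases\HOST01\Prefetch").glob("*.pf")):
    exe_name = pf.stem.rsplit("-", 1)[0]   # strip the trailing path hash
    # The .pf mtime roughly tracks the most recent run of the executable.
    mtime = datetime.datetime.fromtimestamp(pf.stat().st_mtime)
    print(f"{mtime:%Y-%m-%d %H:%M:%S}  {exe_name}")
```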

Step 5 – $MFT Analysis

Analysis Tool: AnalyzeMFT

I collect the $MFT file and then parse it with AnalyzeMFT. This gives me a timeline of the file system so I can perform a cursory review around the suspect date/time for any notable entries. I realize that a lot of malware deletes its own files as it installs and gets up and running; nonetheless, the MFT is essentially the file system, and a lot of the goodies are here for review in timeline format. The more artifacts you review, the more you start to notice anomalies. By taking this approach, you start to build a picture in your head of what has possibly gone on with this suspect host. You still have more artifacts to review, but a picture is starting to evolve.
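A sketch of that cursory review: filter the AnalyzeMFT CSV output down to records created inside the suspect window. Column names vary between AnalyzeMFT versions, so the headers and timestamp format used here are assumptions to check against your own output.

```python
# Filter an analyzeMFT CSV to records created within a suspect window.
import csv
from datetime import datetime

WINDOW_START = datetime(2014, 1, 10, 14, 0, 0)   # example window
WINDOW_END = datetime(2014, 1, 10, 16, 0, 0)

with open("mft_timeline.csv", newline="") as f:
    for row in csv.DictReader(f):
        raw = row.get("Std Info Creation date", "")   # header name varies
        try:
            created = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S.%f")
        except ValueError:
            continue
        if WINDOW_START <= created <= WINDOW_END:
            print(created, row.get("Filename"))
```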

Step 6 – $UsnJrnl file Analysis

Analysis Tool: TZWorks Journal Parser (jp)

I love this file! Weird to say, yes, but I don’t care. This file is incredibly useful. If you want to see the life cycle of a file from birth to death, you can do that by analyzing this file. If acquired in time, it can pick up where the MFT left off: files that the malware deleted as it was getting itself up and running, and thus possibly no longer identifiable in the AnalyzeMFT results, can often be identified here. The results from the jp tool include the MFT record number and sequence number, so you can follow the birth and death of a given file.
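And a sketch of pulling the births and deaths out of the journal parser’s output. I am assuming jp was run with CSV output and that the reason/name column labels look roughly like the ones below; verify against your own results before relying on this.

```python
# Surface file-create and file-delete records from jp CSV output.
import csv

with open("usnjrnl.csv", newline="") as f:
    for row in csv.DictReader(f):
        reason = row.get("reason", "").lower()   # column names are assumptions
        if "file_create" in reason or "file_delete" in reason:
            print(row.get("mft entry"), row.get("name"), reason)
```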

Step 7 – NTUSER.dat & UsrClass.dat Hive Analysis

Analysis Tool: Auto_rip.exe 

Note: Before I comment on this analysis and tool, I can’t emphasize my gratitude enough to Corey Harrell for creating it. auto_rip automates the use of RegRipper plugins; RegRipper itself was created by Harlan Carvey, so even more gratitude goes to Harlan. Read the blog post by Corey on his site to get an understanding of what the tool does.

The NTUSER.dat and UsrClass.dat files we need to analyze are those of the logged-on user. As far as analysis is concerned, the data we collect in Step 7 and Step 8 below actually go together to be parsed with auto_rip.

When auto_rip is unleashed against the required hives, it parses them, seeking out the data identified in the available plugins. You get back an incredible amount of specific, targeted data.
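For reference, an invocation sketch. The -s (source directory of hives), -d (report destination), and -c (plugin category) switches follow auto_rip’s documented usage, but confirm them against Corey’s post; the paths here are examples only.

```python
# Point auto_rip at a folder of collected hives and run all categories.
import subprocess

subprocess.run(["auto_rip.exe",
                "-s", r"C:\Cases\HOST01\hives",    # NTUSER.DAT, SYSTEM, etc.
                "-d", r"C:\Cases\HOST01\reports",  # where category reports land
                "-c", "all"],                      # run every plugin category
               check=False)
```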

I have already published a procedure for this tool. You can view the procedure from the below link.

Parsing Windows 7 Registry Hives with auto_rip 

Step 8 – SYSTEM, SOFTWARE, SAM, and SECURITY Hive Analysis

Analysis Tool: Auto_rip.exe 

See Step 7 above and the procedure mentioned there.

Step 9 – Security Event Log Analysis

Analysis Tool: Event Log Explorer

In this step I am looking for specific data in the event logs. The first log I pull up and filter, based upon my identified timeline, is the Security event log. I am assuming auditing is turned on; quite frankly, if it is not, go turn it on now.

Security Event Log

In this log I am looking for “Process Creation” events. Many times I have quickly identified rogue processes starting up simply by looking in this log. Typically, the notable process has some funky-looking name, which is a clear indication that the device needs additional review to seek out the source of that strange-looking process. I may move on to the System and Application logs later if need be.
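On Vista and later, process creation is recorded in the Security log as Event ID 4688 (Event ID 592 on XP). As a sketch of hunting these programmatically, the third-party python-evtx module (pip install python-evtx) can sweep a collected log; the path is an example and the string matching is deliberately crude.

```python
# Print the NewProcessName from every 4688 (process creation) event.
import re
import Evtx.Evtx as evtx

with evtx.Evtx(r"C:\Cases\HOST01\Security.evtx") as log:
    for record in log.records():
        xml = record.xml()
        if ">4688</EventID>" not in xml:
            continue
        # Pull the NewProcessName data value out of the event XML.
        match = re.search(r'Name="NewProcessName">([^<]+)', xml)
        if match:
            print(match.group(1))
```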


Summary

To summarize: to successfully triage a suspect host, we target very specific data points. We are not waiting 2, 4, 6, or 8 hours to image an entire disk. That takes too long and, quite frankly, is not needed for RR&A. By targeting very specific data points, we can get the answers we need quickly and make decisions even quicker as to next steps.

As I stated earlier, this is my way of performing RR&A. Other organizations do it differently. I would really like to hear your thoughts on my approach or how you perform RR&A. Please comment if you have a moment.

Regards,

Mr. Orinoco

Sunday, January 12, 2014

Rapid Response & Assessment of a Suspect Device - Part 1

Now that the holidays are out of the way I can get back to posting. This post will be a two-part series. I tried to get it out in one part; however, time and commitments did not allow it.

As the title suggests, this topic is Rapid Response & Assessment (RR&A). When a suspect device comes your way, how do you respond to get the answers you need quickly? These two posts will discuss an approach that has evolved within my environment based upon the influences I am exposed to.

I hope you enjoy the read and I encourage feedback. I want to know how your organization deals with RR&A.


PART1

Preface
Rapid Response & Assessment (RR&A) is exactly that: a quick response by IT Security personnel assessing a potential concern they have been alerted to. Get it right by implementing the correct program, and you will be in a good position to fend off the bad guys and minimize the potential damage that will undoubtedly unfold otherwise. Get it wrong, well, no need to explain that outcome!

Depending on the tools, resources (head count and money), talent, company culture, management support, Corporate Security and HR support, priorities, policies, processes, internal politics, significance in the global economy, regulations, and the sense of urgency that individuals within the organization place on Rapid Response & Assessment, some or all of these influences may shape how an organization responds to suspect devices that appear on its radar for one reason or another. No two organizations are alike, and so goes their RR&A approach. My organization is no different: we execute our existing Rapid Response & Assessment procedure based upon the vast majority of the above-mentioned items. Is it perfect? No. Is it driven by a person with a passion for doing this stuff, who is continuously tweaking and fine-tuning it, always trying to do the right thing for his employer? Absolutely.

In no particular order, let’s take a cursory look at the above-mentioned items. 

RR&A – Recipe for Success 

1. Tools – Do you own tools that can give you rapid unfettered access to remote hosts to perform assessments?

2. Resources – Do you have the people and money to operate?

3. Regulations – Are there any regulations setting minimum requirements for on-site full time Incident Response and Digital Forensic personnel?

4. Talent – Do you have the right people with the required skill sets?

5. Culture – What is the cultural attitude within the organization on this issue?

6. Management Support – Do the correct management within the organization support your efforts?

7. Priorities – Where does RR&A stack up on the priority list?

8. Policies – Do you have policies in place to guide your response?

9. Processes – Do you have tested & mature processes to execute RR&A? 

10. Politics – Are there any inter group politics inhibiting your RR&A program?

11. Significance – What is your company’s significance within the global economy?

12. Urgency – Does your organization have an inherent sense of urgency to deal with these matters?

13. Corporate Security & HR Support – This one is an absolute must; however, you need to have all your ducks lined up to leverage ongoing support for your program. Do you have support from the folks in Corporate Security and Human Resources when you send them reports that require them to issue a slap on the wrist for employee misconduct? After all, do you want the report detailing a virus outbreak caused by an employee policy violation, one you spent many hours putting together, to disappear into a black hole on the HR manager’s desk? You must partner with these folks and make them clearly understand what it is you are bringing to them. Your concerns and recommendations must emphasize the seriousness of the matter and lay out the potentially devastating consequences of the user’s actions. For example: this could have been much worse, and here’s why.

Any one of the above items can impact the level of success you enjoy with your RR&A program.  

The purpose of this post is to detail the RR&A approach and methods used by my organization, in particular my group, in the hope that other individuals who may be struggling in this area can get some ideas on how to address RR&A in their own organizations, or that I may get some feedback and suggestions on how to improve my own program. Even if this approach cannot be replicated due to specific constraints (see Recipe for Success), there may be pieces of it that can be utilized. It boils down to sharing knowledge, ideas, and varying methods/approaches to achieving a goal: responding to an incident as quickly and as efficiently as possible to determine whether there is a problem. I would like to hear how other organizations perform RR&A.

Shown below is a high-level view of my environment. The end node environment is made up of end user devices such as desktops and laptops, plus servers of various types. In this particular environment, Rapid Response & Assessment resides with the Forensic Response Team.

The Environment
30,000 Windows desktop devices (PCs/laptops) spread across multiple continents
4,000 Windows servers spread across multiple data centers

The Monitoring and Response Teams
Security Operations Center – First Alert, Triage & Incident Response Coordinators
Threat Management Team – NetFlow Analysis, Packet Analysts, Malware RE
Forensic Response Team – Host Assessment and Disk Forensics

The whole point of RR&A is to be in a confident position (see Recipe for Success) with mature and tested procedures (test and trust your procedures) that can quickly confirm or deny whether a problem exists. You will need to determine very quickly if a reported incident requires additional follow-up, or if you can stand down because it was a false alarm. My group typically responds to devices where the SOC has detected an anomaly based upon output in the SIEM (Security Information & Event Management) system. To be able to respond effectively and efficiently, you need the right tools deployed into the environment. For a large enterprise spread over multiple continents, an agent-based tool is best suited to give the responder instant access to the end node in question.

As the saying goes, there is more than one way (and tool) to skin a cat. The information here is one example of how a given organization performs RR&A. Again, see the Recipe for Success noted above for the factors that can influence your program and approach.

Tools & Procedures
Today’s response tools need to be as capable and versatile as the malware we are encountering. What does this mean? Well, the first thing that comes to mind is a tool that can perform remote RAM dumps (aka volatile data) and bring the data back for analysis. Additionally, you need to be in a position to cherry-pick the files of interest that will be reviewed during your assessment. For example, during incident response you need to be able to perform a RAM dump and start processing it with a tool such as Mandiant’s Redline or Volatility, while at the same time collecting and processing other parts of the system activity. The particular tool used by my group to achieve the remote collection is EnCase Enterprise.

Analysis Tip: Use the 32-bit version of EnCase to acquire RAM dumps on 32-bit target devices.

At a high level, the RR&A goes something like this: Alert, Respond, Pull Data, Analyze, and Decide.

Alert
The Alert will come from the SOC. They have seen something suspicious, deemed it a concern, and opened up an incident ticket.

Respond
Once the incident ticket hits us, RR&A personnel are engaged to act.

Pull Data
The device is identified and attached to with our enterprise tool; a procedure is then followed to start pulling volatile data and select files. The data of interest is listed below in the order it is acquired.

Step 1 – Attach to target machine with EnCase Enterprise
Step 2 – Perform a Sweep Enterprise (a very useful EnCase feature)
Step 3 – Perform a RAM dump
Step 4 – Collect Prefetch files
Step 5 – Collect the $MFT
Step 6 – Collect the $UsnJrnl file
Step 7 – Collect NTUSER.DAT and UsrClass.dat for the logged-on user
Step 8 – Collect SYSTEM, SOFTWARE, SAM, and SECURITY hives
Step 9 – Collect Application, Security, and System event logs

The collection of the above data does not actually take that long. The forensic platform design and the placement of equipment throughout the environment facilitate the expeditious collection of the data and its delivery to the examiner for analysis. A small sanity-check sketch is shown below.
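One small piece worth scripting here is a check that every artifact actually landed before analysis begins. A minimal sketch, assuming a flat collection folder and file names along the lines of the steps above:

```python
# Verify the expected artifact set is present in the collection folder.
from pathlib import Path

COLLECTION = Path(r"C:\Cases\HOST01\collected")   # example layout
EXPECTED = ["ram.bin", "$MFT", "$UsnJrnl", "NTUSER.DAT", "UsrClass.dat",
            "SYSTEM", "SOFTWARE", "SAM", "SECURITY",
            "Application.evtx", "Security.evtx", "System.evtx"]

missing = [name for name in EXPECTED if not (COLLECTION / name).exists()]
if missing:
    print("[!] Re-pull these artifacts:", ", ".join(missing))
else:
    print("[+] Collection complete; start analysis.")
```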

Analyze
After pulling the data outlined above, we need to quickly parse it with our tools in an attempt to identify any notable concerns. Of note, the Sweep Enterprise feature built into EnCase is incredibly useful. By entering the IP address of the target machine into Sweep Enterprise and executing it, we gain much visibility into the device’s activity on a number of levels: for example, running processes, network connections, open files, etc.

Decide
A decision is made on whether to pursue the device further. False alarms do happen, and this allows the SOC to fine-tune their alerts.

In Part 2 of this post I will discuss the actual procedures used to parse the collected data looking for indicators of compromise. That post will detail the tools and approach based upon the data points we want to analyze.