cybersecurity, elastic

From Zero to Visibility in record time

Author

Hilton D


In security, context is critical. To make a simple analogy: your wife holding a bread knife during the daytime is a very different situation to an intruder wielding that same knife in the dark of night.

At Threatbear we help Aussie companies detect and respond to cybersecurity threats, and the workflow often goes like this:

Install an osquery fleet server ~1 day

Build the binaries and connect the endpoints ~1 day+

Create a data enrichment pipeline to feed osquery events into our SIEM ~1 day to a week
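As a rough illustration of that enrichment step, a minimal Logstash pipeline can tail the osquery results log and ship events to Elasticsearch. This is a hypothetical sketch: the log path is the osqueryd default on Linux, while the cluster address and index name are placeholders, not our actual setup.

```
# Hypothetical Logstash pipeline: osquery results -> Elasticsearch
input {
  file {
    path  => "/var/log/osquery/osqueryd.results.log"  # default osqueryd results log on Linux
    codec => "json"                                   # each line is one JSON result event
  }
}
filter {
  mutate {
    add_field => { "event.module" => "osquery" }      # tag events for later filtering in the SIEM
  }
}
output {
  elasticsearch {
    hosts => ["https://elastic.example.com:9200"]     # placeholder cluster address
    index => "osquery-%{+YYYY.MM.dd}"
  }
}
```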

And then I repeat that process for other areas that I want visibility into, for example:

Sysmon for Windows process + network data ~1 day+ (I still haven't managed to build a Sysmon .msi file so it can be deployed using Intune)

Auditd for Linux process + network data ~1 day+
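For the auditd piece, the process + network visibility boils down to a handful of audit rules. A minimal sketch (the file name and rule keys are my own naming, not a prescribed standard):

```
# /etc/audit/rules.d/visibility.rules -- hypothetical rule file
# Record every program execution (process data)
-a always,exit -F arch=b64 -S execve -k process-exec
-a always,exit -F arch=b32 -S execve -k process-exec
# Record outbound connection attempts (network data)
-a always,exit -F arch=b64 -S connect -k net-connect
```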

As you can imagine, this involves a lot of engineering work: switching context between data analysis, security, and systems engineering, fighting with YAML, and so on. The process can take days or weeks, and that complexity distracts from the actual work of identifying threats and hunting for badness!

However, with the release of Elastic 7.13 and the Elastic Agent, the process is a lot simpler (and crazy simple if you are on Azure):

Stand up an Elastic stack (I went for hot + frozen tiers so that I can conduct threat hunts on data from 6+ months ago) ~5 minutes

Enable [Elastic] Fleet and download the elastic-agent binary for your platform ~30 minutes to manually install it on two hosts. This will be negligible once I have an Ansible role set up to do the dirty work for me!
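That Ansible role could be as small as two tasks. A hypothetical sketch, where the agent version, download URL, and the Fleet URL/enrollment-token variables are all placeholders you would substitute for your own environment:

```
# roles/elastic_agent/tasks/main.yml -- hypothetical sketch
- name: Download and unpack elastic-agent
  unarchive:
    src: https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.13.0-linux-x86_64.tar.gz
    dest: /opt
    remote_src: yes

- name: Install the agent and enroll it in Fleet
  command: >
    /opt/elastic-agent-7.13.0-linux-x86_64/elastic-agent install -f
    --url={{ fleet_server_url }}
    --enrollment-token={{ fleet_enrollment_token }}
  args:
    creates: /opt/Elastic/Agent/elastic-agent
```

The `creates` guard keeps the task idempotent, so re-running the role against an already-enrolled host is a no-op.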

That’s it — there are no more steps!

As a security analyst this is super exciting — I can now conduct hunts armed with this contextual information, all within a single pane of glass:

Process, File and Network events on Windows and Linux

Using osquery I can easily ask questions about almost any aspect of a system via the 100+ tables that ship with osquery by default (users, processes, packages, ssh_authorized_keys, listening_ports, etc.)
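For instance, joining two of those tables answers "which processes are listening on the network, and on what port?" — a sketch you could paste into osqueryi:

```
-- Hunt sketch: which processes are listening, and on what port?
SELECT p.name, p.path, lp.port, lp.address
FROM listening_ports lp
JOIN processes p USING (pid)
WHERE lp.port != 0;
```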

Here is a picture taken about an hour after having zero visibility:


Easily explore the relationship between processes and network flows

In the above picture I am exploring the parent-child execution relationship, where cron has spawned a scheduled task which in turn has initiated an scp remote file copy. What is great about this is that I can then pivot on any of these axes to figure out:

Which user account initiated a process

What the process hash, working directory, and arguments are

Details on the protocol, src_port, dest_port, and IP address it communicated with

How long the process ran for (many short-lived processes that quickly terminate can be an indicator of malicious activity)
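In Kibana each of these pivots is just a filter over standard ECS fields. A hedged KQL sketch for the cron/scp example above, one filter per pivot (the field names are standard ECS; the values are illustrative):

```
process.parent.name : "cron" and process.name : "scp"
user.name : "root" and process.working_directory : "/root"
network.transport : "tcp" and destination.port : 22
```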

In short, after about an hour's work as an analyst I can do my job and start hunting for threats in record time — way to go Elastic!