
Lab 3.1: Baby’s First Detection (KQL)

Est. Time: 45 minutes

Goals:

  • Configure a Jira connection in Elastic
  • Create a detection for someone querying the local users on a host using net user (T1087.001: Account Discovery: Local Account)
  • Trigger your alert

Instructions

  1. For this lab you will be creating a detection to look for the MITRE technique T1087.001, which is Account Discovery: Local Account. This technique is used by attackers to get a listing of local system accounts. It can be done in a number of ways.
  2. The specific method we’re going to be looking for today is net user, which lists the local users on a host.
  3. Every good detection starts with a query to see what results are returned. So start in Discover by building a query to look for that data.

    1. There are plenty of ways to look for this. One way would be to search for something like process.command_line: *net*user*, but that is a lot of load on the cluster for such a simple search, since there are three wildcards. (A quick side-by-side comparison follows this breakdown.)
      • Why this search would have so much load
        1. Leading wildcards: Elastic would normally have a starting point from which to narrow its search, but a leading wildcard forces Elastic to look at every event
          1. For this reason, leading wildcards can actually be disabled on a cluster
        2. *user*: Once Elastic has determined which events contain net, it must then look for “user” anywhere after that, with any characters before or after it
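
        For a quick side-by-side, here is the heavy wildcard search next to the exact-match style this lab builds toward (the second query is the same one introduced a little further down):

          Heavy (three wildcards; the leading one forces a scan of every event):
          process.command_line: *net*user*

          Cheaper (exact matches against fields that are already indexed):
          process.name: net.exe and process.args: user
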
    2. This is where I’ll introduce you to the very cool field called process.args. This field indexes the arguments of the process.command_line field as an array. We can use this to match an event on multiple command line arguments, even with other things before, after, or between them!
      1. Fun fact: this also means that if an attacker were to run net user, our detection should still work, since that is still just two arguments. The same goes even if they ran net user > something.txt, because the redirect is handled by the shell rather than passed to net.exe as an argument. A sketch of how these fields might look on such an event follows below.
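
      To make the “array” part concrete, here is a rough sketch of how the relevant fields might look on such an event (values are illustrative and will vary by host and by how the event was collected):

        process.name: net.exe
        process.command_line: "C:\Windows\system32\net.exe" user
        process.args: ["C:\Windows\system32\net.exe", "user"]
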
    3. Below is the basic query I settled on for my detection. Yours may look different, and that is fine. I encourage you to work with your own query.

      process.name: net.exe and process.args: user
      
      1. And it matches:

        [screenshot of the matching event in Discover]

      2. Note that in this case if I had matched on both of the args “net” AND “user”, it wouldn’t match, since the first argument is actually the full path to net.exe, not just “net”.

      3. Regardless of where the user runs this command (PowerShell, cmd, whatever), the actual executable doing the work is net.exe. So I can specify that executable to hugely optimize my search, and in this case catch any event that queries users, regardless of what other arguments are used.
      4. Now that we have a query, let’s assess the volume that this detection may bring our SOC.
        1. Run your query against your whole dataset over a long period. A good rule of thumb is to look at 30-90 days.
        2. Every event returned by your query would roughly equate to one alert, barring any suppression or filtering.
        3. For now, you will probably see very few results; if you see more than a handful, there might be an issue ;)
        4. On the chance you’re doing this on your own host and you see a lot of events, you either need to refine your query or apply some filtering; a quick example of filtering directly in the query follows this list. There will be a lab later in the course on filtering and a lab on building a detection that excludes noise.
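
        As an example of that kind of query-level filtering, if a particular account turned out to generate most of the benign hits, you could exclude it directly in the search (the account name below is made up purely for illustration):

          process.name: net.exe and process.args: user and not user.name: svc_inventory
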
      5. Now that we have a query, we can move forward with creating the alert itself.
      6. Before we plug in the actual search syntax, we need to create the detection rule in Elastic.
        1. Head to Rules > Detection rules (SIEM)
        2. Remember, do not click “Add Elastic rules”
        3. Select “Create new rule”
        4. We want the Rule type of “Custom query”
        5. For “Source”, leave that as the index patterns in question. If we had, for example, a Windows dataview set up, we could specify it here to limit the alert’s load when it runs, but we haven’t set that up.
      7. Optimizing searches, part 2

        In simple terms, Elastic has to search all events for whatever you are querying. If you had dataviews specific to certain types of data, that would significantly reduce the overall set of logs Elastic has to look through. For example, why would you bother searching across Linux logs for a Windows event? You wouldn’t.

        That is why having a Dataview for certain log types can be a powerful optimization tool. You have some of that now within your Elastic, but we won’t be spending time in this course working with Dataviews. They are, however, an improvement you should look into.
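
        As a very rough illustration, a Windows-focused Dataview might use an index pattern along these lines; the exact pattern depends entirely on which integrations are writing data in your environment, so treat this as a sketch rather than something to copy:

          logs-windows.*, logs-system.*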

    4. “Custom query” is where your query itself goes

      1. In my case, process.name: net.exe and process.args: "user"
    5. For the Suppression section, suppress by host.name and user.name, per rule execution.

      1. This suppression will prevent us from constantly receiving alerts for what is effectively the same activity. If I run net user 6 times on a host in a couple of minutes, we don’t need 6 alerts for that; 1 will do just fine, especially when you only need one good alert to find an attacker.
        1. Hit “Rule preview” at the top with a time range applied to see what comes back!

    6. Hit Continue if the results look good and the volume is low.

    7. Now we’re prompted to enter information about the rule.
    8. Give the rule a name and description.
      1. I named mine “Net User Discovery”.
      2. The description I gave it is pulled directly from MITRE:

        Adversaries may attempt to get a listing of local system accounts. This information can help adversaries determine which local accounts exist on a system to aid in follow-on behavior. Commands such as net user and net localgroup of the Net utility and id and groups on macOS and Linux can list local users and groups. On Linux, local users can also be enumerated through the use of the /etc/passwd file. On macOS the dscl . list /Users command can be used to enumerate local accounts.

        1. As for the severity and risk score, you can set that as you like. A lot of this will depend on the environment, your specific security concerns, and other context like that.
        2. In my case I am setting the detection to Low with a risk score of 10. That is because, to me, enumerating users on a host is not inherently malicious; there can be benign use cases for doing so.

  4. Open up the Advanced settings.

    1. For the “Reference URLs”, you can list any reference information that led you to create this alert, or would be useful for someone reviewing this detection or its alerts.
      1. I included the Atomic Red Team atomic link.
    2. “False positive examples” is fairly self explanatory. Add any examples you can think of where this alert may fire on a false positive event. In my case I put “A technician performing maintenance on a host” as an example.
    3. For the MITRE section, navigate down the list to add T1087.001. Note that you can add multiple tactics, but for this alert we really only need to add one.

    4. Down in “Investigation guide” is where you put a guide on how to investigate your alert. Spend some time thinking about how you would determine whether this is malicious activity or not.

      1. This will usually be easier if the rule was your idea. If that’s the case, there’s a reason you want to alert on something, so it will come naturally. However, that won’t always be the case; you may be assigned to build a detection that wasn’t your idea. That is where your research step comes in.
      2. In this case, we’re building this to address that specific MITRE technique. Pull on that MITRE page for inspiration: https://attack.mitre.org/techniques/T1087/001/
      3. Here is what I put in mine:

    5. Under “Author” put your name.

    6. Under “Schedule rule” leave everything at the default settings.
    7. Continue on to “Rule actions”.
    8. Select “Jira”. Since this is our first detection using Jira, we will need to create the Jira connector now.
    9. On another tab, navigate to: https://id.atlassian.com/manage-profile/security/api-tokens
    10. Create an API token (note the expiration date), copy it, and return to Elastic. Select Create a connector.
    11. “URL” should be the URL of your Jira space. For example mine is “https://nynir.atlassian.net”.
      1. Do not include a forward slash at the end of the URL
    12. “Project key” is the KEY ABBREVIATION associated with your Jira project. Mine is “SOC” (enter just the key itself, no parentheses).

      [screenshot of the Jira project key]

    13. “Email” is your Atlassian login email.

    14. “API token” is the API key you just created.
    15. Hit Save.
    16. Now we need to fill in the Jira ticket Action. This configures things like the Issue Type, the field values, and the priority that Jira will use when it creates the ticket.

    1. You can set this up how you like, but how I set mine up can be seen below, with examples of how to include the alert values from Elastic. At a minimum, make sure you set the Issue type to “[System] Incident” or you won’t trigger the SOC workflow in your Jira.
      1. As a baseline, you also need to fill in the Summary and Description fields, but I recommend using dynamic field values for those. More on that below.
    2. You can click the side buttons to add in dynamic field values.

    3. You can also add field names as variables

      1. For more information on how specifically these variables work, see here: https://www.elastic.co/guide/en/kibana/current/rule-action-variables.html
    4. Here is what my Jira Action looks like:

    5. A quick explanation of my dynamic field values

      A “context” is just what it sounds like: a part of something, its surroundings, et cetera. Elastic needs a context to look within for certain values.

      You’ll notice, though, that while I have context.rule.description by itself (which will populate on its own), a number of the field values sit inside a context.

      Look for the {{#context.alerts}} to mark the beginning of that context, and {{/context.alerts}} to mark the end. This is telling Elastic “all the field values between these two tags are found within the context of context.alerts”. (A rendered example follows the template below.)

    6. Text values for you to copy if you so desire:

      {{rule.name}}
      
      {{context.rule.description}}
      --- Hits: {{state.signals_count}} ---
      {{#context.alerts}}
      Timestamp: {{kibana.alert.original_time}}
      Host: {{host.name}}
      User: {{user.name}}
      Process Id: {{process.pid}}
      Process: {{process.executable}}
      Command Line: {{process.command_line}}
      Parent Process Id: {{process.parent.pid}}
      Parent Process: {{process.parent.executable}}
      Parent Command Line: {{process.parent.command_line}}
      {{/context.alerts}}
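
      To give a sense of how that context section expands, here is roughly what the portion from the Hits line down might render to for a single matching alert (every value below is made up purely for illustration):

        --- Hits: 1 ---
        Timestamp: 2024-05-01T14:03:22.000Z
        Host: WIN10-LAB
        User: jdoe
        Process Id: 4312
        Process: C:\Windows\System32\net.exe
        Command Line: net user
        Parent Process Id: 5120
        Parent Process: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
        Parent Command Line: "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe"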
      
  5. Hit “Create and enable” on your rule.

  6. Your detection should now look something like this

  7. Switch over to your VM and run the “net user” command from a PowerShell window.
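
    For reference, the command (and the redirected variation mentioned earlier, which should still fire the detection since the redirect is handled by the shell rather than passed to net.exe):

      net user
      net user > something.txt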

  8. Within a few minutes (or sooner if you click Manual Run), if everything is done correctly, you should see:

    1. An Elastic alert

    2. A Jira ticket with your alert information! (If you set the severity to Low in Elastic, it also should not have an SLA.)

  9. If for whatever reason you aren’t getting the above results, there are a few things you can check:

    1. If your alert is NOT showing up on the Elastic side:
      1. Make sure your query is matching the events properly. Take your alert query and drop it back in Discover to make sure your net user events are getting picked up there.
    2. If your event shows up in Elastic as an alert but is not showing up in Jira even after a few minutes:
      1. Make sure your Jira connection is configured properly.

Tips & Help

  • Editing a Connector
    • If you ever need to edit your connector, that can be found in the left side menu under: Settings → Connectors

Hardmode

Did you nail this lab right away? If so, here’s an additional challenge for you:

  1. Recreate the detection in EQL or ES|QL
  2. I encourage you to share those queries with the class after the lab via Discord

Additional References

Next: Lab 3.2: Testing Detections with Atomic Red Team