Storing GitHub Org Auditlogs in Elasticsearch

I had a need to generate an alert when someone overrode a Branch Protection setting. To do this I decided to pull some of the GitHub org audit log into Elasticsearch. There’s a GitHub API client written in sh, called ok.sh, which can be found here. At the time it didn’t support querying the org audit log, so I PR’d that here. Once the PR was in place, I wrote a Dockerfile to create a container to deploy on Kubernetes.
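The ok.sh changes themselves aren’t shown here, but the underlying idea can be sketched with plain curl. The org name, index name, and Elasticsearch host below are placeholders, and the audit log endpoint assumes a token with the appropriate organisation/audit-log permissions:

    #!/bin/sh
    # Sketch only: pull recent org audit log events from the GitHub API and
    # index each event into Elasticsearch. Not the ok.sh-based container from the post.
    ORG="my-org"
    ES="http://elasticsearch:9200"

    curl -s \
        -H "Authorization: Bearer ${GITHUB_TOKEN}" \
        -H "Accept: application/vnd.github+json" \
        "https://api.github.com/orgs/${ORG}/audit-log?per_page=100" |
      jq -c '.[]' |
      while read -r event; do
        curl -s -X POST -H 'Content-Type: application/json' \
            "${ES}/github-audit/_doc" -d "${event}"
      done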

Packet Capture using tcpdump on Kubernetes Pods in Azure AKS

Assuming the target containers can actually install new software (apt install is available), what follows is a quick and very dirty method to run tcpdump on k8s/AKS containers in Azure. If you’re running Kubernetes 1.23 and up, please read this instead: https://downey.io/blog/kubernetes-ephemeral-debug-container-tcpdump/

First, install some needed utilities. Use whatever pod label is required to target the right pods:

    kubectl get pods -l <LABEL> -o name | \
      xargs -I{} kubectl exec {} -- apt-get -y update

Then install tcpdump, screen, psmisc, and rclone.
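The same exec-via-xargs pattern should cover that step, something like:

    # Install the capture/debug tooling on every pod matching the label.
    kubectl get pods -l <LABEL> -o name | \
      xargs -I{} kubectl exec {} -- \
        apt-get -y install tcpdump screen psmisc rclone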

Azure Translation Services with Elasticsearch and Logstash

I recently had a need to parse some XML into Elasticsearch, specifically some CERT RSS feeds. Logstash has an RSS input, but it’s a bit basic and doesn’t provide any language indications if the RSS feed includes them. One of the feeds I’m using can vary in language from item to item, and many others are non-English entirely. Since the same RSS feeds would be parsed repeatedly, I need a predictable document ID so I can re-insert / upsert each item rather than create a new document every time the feed is parsed.
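The post builds this up in Logstash, but the core idea can be shown with curl alone: derive the document ID from something stable in the feed item (the item link, say) and upsert against that ID, so re-parsing the feed overwrites the existing document instead of creating a duplicate. Index and field names here are illustrative:

    # Stable ID: hash of the item's link, so the same RSS item always maps to
    # the same Elasticsearch document.
    LINK="https://cert.example.org/advisories/2021-001"
    DOC_ID=$(printf '%s' "${LINK}" | sha1sum | cut -d' ' -f1)

    curl -s -X POST -H 'Content-Type: application/json' \
        "http://elasticsearch:9200/cert-feeds/_update/${DOC_ID}" \
        -d '{"doc": {"title": "Example advisory", "language": "de"}, "doc_as_upsert": true}'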

Fedora CoreOS 35 USB Boot on Raspberry Pi 4

Note: While the RPi now boots, if I have anything else plugged into the USB ports, such as a keyboard, it throws the same TRB-related error. For whatever reason, Fedora’s support of Raspberry Pi 4s seems a bit iffy. The official documentation (here) is quite good, and I managed to easily get the RPi4 booting CoreOS via the EDK2 UEFI firmware approach. The problem was that I wanted to use the U-Boot approach, and that was just not playing ball.

Modsecurity, DetectionOnly and enforcing select rules

I recently had a reason to want to achieve the following:

- ModSecurity globally in DetectionOnly mode (not enforcing rules, just logging).
- Continue to operate the CRS in DetectionOnly mode.
- For a specific ruleset:
  - Enforce a default deny on inbound requests to an API.
  - Enforce allow rules for specific routes and methods of the API.

So I wanted all of our inbound CRS rules to continue to work in DetectionOnly mode, while I had a custom set of rules that would deny all access, with a set of whitelists for specific methods/paths.

Alerting using SIEM Detections and ElastAlert2

Elasticsearch SIEM Detections, and Alerts and Actions, are quite useful features, except for the fact that actual alerting is behind a license paywall. So while both of these features can run rules, check for conditions, and record the results in an index, neither of them actually provides alerting support. Alerting requires a Gold license which, if alerting is the only thing you want, is an excessive cost. If you can’t move off Elasticsearch to OpenSearch, which has Alerting available for free, you can use tools such as ElastAlert2 to handle the alerting requirements.

Using Elasticsearch Upserts to Combine Multiple Event Lines Into One

Note: This approach is probably not appropriate for high-volume / high-throughput events. In my case it required quite a lot of Logstash parsing and Elasticsearch doc_as_upsert use, both of which carry a significant performance penalty. For low-throughput use it works fine. Sometimes log sources split logically grouped events into separate lines, and sometimes those logically grouped event lines are mixed into the same log file with actual single-line events.
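As a bare-bones illustration of the doc_as_upsert behaviour (plain curl, with made-up index and field names rather than my actual Logstash pipeline): two partial documents written against the same ID end up merged into a single document:

    ES="http://elasticsearch:9200/session-events/_update/session-1234"

    # First event line: creates the document.
    curl -s -X POST -H 'Content-Type: application/json' "${ES}" \
        -d '{"doc": {"user": "alice", "started_at": "2021-06-01T10:00:00Z"}, "doc_as_upsert": true}'

    # A later line from the same logical event: merged into the same document.
    curl -s -X POST -H 'Content-Type: application/json' "${ES}" \
        -d '{"doc": {"bytes_sent": 4096, "ended_at": "2021-06-01T10:05:00Z"}, "doc_as_upsert": true}'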

Event Threat Enrichment using Logstash and Minemeld

At my work we use the Elastic Stack for quite a few things, but one of the more recent-ish “official” roles is as our SIEM. Elastic introduced SIEM-specific functionality to Kibana a few releases ago, around 7.4 if I remember correctly. One of the features that the Elastic Stack doesn’t really support well (yet) is an enrichment system. They did introduce an Elasticsearch-side enrichment system in 7.5, but in my opinion there are a few problems with it:

Querying Cylance Protect Api From Shell

We use Cylance as our AV-type protection. They’re one of the better solutions I’ve seen, but there are some strange gaps in my opinion. There doesn’t seem to be a built-in method for alerting. One of the things we’d like to be able to alert on is when a device goes “offline”, and apparently this information is not provided through Cylance’s syslog output. It is, however, available from their API.
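As a rough sketch of the idea only: the endpoint path, paging parameters, and the state field below are from memory and should be checked against Cylance’s API documentation, and it assumes an API token has already been obtained and exported as CYLANCE_TOKEN:

    #!/bin/sh
    # Hypothetical sketch: list devices and print the names of any reported offline.
    # Base URL and response shape may differ per region/tenant; verify against the docs.
    API_BASE="https://protectapi.cylance.com"

    curl -s -H "Authorization: Bearer ${CYLANCE_TOKEN}" \
        "${API_BASE}/devices/v2?page=1&page_size=200" |
      jq -r '.page_items[] | select(.state == "Offline") | .name'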

Kibana Authentication using OAuth2 Proxy in Kubernetes

NOTE: There appears to be a bug with Kibana’s impersonation features and SIEM detection rules (and possibly elsewhere): https://github.com/elastic/kibana/issues/74828

Recently I had reason to want to integrate Kibana with Azure Active Directory for authentication. This might be easily possible if you have a commercial license with Elastic, but that wasn’t the case this time. After a little bit of research I found this article, from February 2017: User Impersonation with X-Pack: Integrating Third Party Auth with Kibana
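For reference, the impersonation side of that approach is an Elasticsearch security feature (run_as): a proxy or service user whose role grants run_as can act as another user by setting a header on the request. A minimal sketch, with placeholder user names and host:

    # Authenticate as the service user, but run the request as "alice".
    # Requires the service user's role to grant run_as for the target user.
    curl -s -u oauth2-proxy-svc:"${SVC_PASSWORD}" \
        -H "es-security-runas-user: alice" \
        "https://elasticsearch.example.com:9200/_security/_authenticate?pretty"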

Elasticsearch Provided Name and ILM

As I was learning a little about Elasticsearch’s ILM (Index Lifecycle Management) feature, I ran across a parameter called provided_name when examining an index. A bit of searching turned up this GitHub issue, but it doesn’t really explain where it comes from. A bit more searching led me here. So it seems provided_name is a way of templating index names, using date math as explained in the documentation. Just make sure to URL-encode the index name, as per the example in the documentation.
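A minimal sketch of that from the shell (illustrative index name, not the documentation’s exact example):

    # The date math index name <logs-{now/d}> has to be percent-encoded in the
    # request path: < > { } / become %3C %3E %7B %7D %2F.
    curl -s -X PUT "http://elasticsearch:9200/%3Clogs-%7Bnow%2Fd%7D%3E"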