
Good List of Open Source Security Projects

This is a compilation of some excellent open source security projects. I will continue to update this page; leave a comment below if you have any good reference projects or open source security tools to add. I am excluding the obvious ones, like Metasploit and Bro, from this list.

Platform / Host Security

OSQuery from Facebook

Reference Link: https://osquery.io/

Github link: https://github.com/facebook/osquery

Commercial comparison: The closest commercial equivalent is Tanium.

Description: osquery gives you the ability to query and log things like running processes, logged-in users, password changes, USB devices, firewall exceptions, listening ports, and more. It allows you to easily ask questions about your Linux and OS X infrastructure, whether your goal is intrusion detection, infrastructure reliability, or compliance.
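
To make this concrete, here is a minimal sketch that shells out to the osqueryi interactive client (assuming it is installed and on your PATH) to ask which processes are listening on which ports. The table and column names come from osquery's standard schema.

import json
import subprocess

def run_osquery(sql):
    """Run a single osquery SQL statement via the osqueryi shell
    and return the decoded JSON result rows."""
    # --json makes osqueryi emit its results as a JSON array
    out = subprocess.check_output(["osqueryi", "--json", sql])
    return json.loads(out)

# Example: which processes are listening on which ports?
rows = run_osquery(
    "SELECT p.name, lp.address, lp.port "
    "FROM listening_ports lp JOIN processes p USING (pid);"
)
for row in rows:
    print(row["name"], row["address"], row["port"])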

OSSEC

Reference link: http://ossec.net/

Github link: https://github.com/ossec/ossec-hids

Description: OSSEC is an Open Source Host-based Intrusion Detection System that performs log analysis, file integrity checking, policy monitoring, rootkit detection, real-time alerting, and active response.

SIMP from National Security Agency (NSA)

Reference link: http://simp.readthedocs.org/en/latest/

Github link: https://github.com/NationalSecurityAgency/SIMP

Description: SIMP keeps networked systems compliant with given security standards. It is a configuration management framework and, more importantly, a means for automated compliance checking/validation, with excellent out-of-the-box integration with Puppet, authentication via OpenLDAP, and other update options.

Cloud Security

Security Monkey from Netflix

Github link: https://github.com/Netflix/security_monkey

Description: Security Monkey monitors policy changes and alerts on insecure configurations in an AWS account.

Cyber Security

GRR from Google

Github link: https://github.com/google/grr

Commercial alternative: FireEye/Mandiant’s MIR incident response platform

Description: GRR Rapid Response is an incident response framework focused on remote live forensics. It ships a Docker image that gets you up and running in about two minutes, and it offers cross-platform support for Linux, Mac OS X, and Windows clients. It can perform live remote memory analysis using open source memory drivers for Linux, Mac OS X, and Windows, together with the Rekall memory analysis framework.

ThreatExchange from Facebook

Reference link: https://developers.facebook.com/docs/threat-exchange/v2.4

Github link: https://github.com/facebook/ThreatExchange

Description: More than 90 companies now use Facebook’s cybersecurity platform, ThreatExchange, to share security and threat information. It is a set of RESTful APIs on the Facebook Platform for querying, publishing, and sharing security threat information – including details on malware, phishing pages, and other threats – with either specific members of the security community or the community at large.
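
As a rough illustration of the API’s RESTful flavor, the sketch below queries the Graph API for threat indicators matching a suspicious domain. The endpoint path, token format, and response shape follow the v2.4 docs linked above, but treat them as assumptions to verify; the domain and credentials are placeholders.

import requests

# Hypothetical credentials; ThreatExchange uses a Facebook app access
# token of the form "<app-id>|<app-secret>".
ACCESS_TOKEN = "APP_ID|APP_SECRET"

# Search threat indicators for a suspicious domain (endpoint name per
# the v2.4 docs linked above; verify against current documentation).
resp = requests.get(
    "https://graph.facebook.com/v2.4/threat_indicators",
    params={"access_token": ACCESS_TOKEN, "text": "evil-domain.example.com"},
)
resp.raise_for_status()
for indicator in resp.json().get("data", []):
    print(indicator)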

MozDef: The Mozilla Defense Platform

Reference link: http://mozdef.readthedocs.org/en/latest/

Github link: https://github.com/jeffbryner/MozDef

Description: The Mozilla Defense Platform (MozDef) seeks to automate the security incident handling process and facilitate the real-time activities of incident handlers. It allows for collaborative incident response, visualizations, and easy integration with other enterprise systems.

Scumblr & Sketchy from Netflix

Github link: https://github.com/Netflix/Scumblr/wiki

Github link: https://github.com/Netflix/sketchy

Description: Scumblr performs periodic searches and stores or takes action on the identified results. Things to look for include compromised credentials, vulnerability/hacking discussion, attack discussion, security-relevant social media discussion, etc. – anything that can help your security team keep tabs on security- and attack-related social media and Internet chatter. Sketchy complements Scumblr by taking automatic screenshots, text scrapes, and HTML captures of pages before they can be taken offline. This information can be stored locally or in an S3 bucket on Amazon.

Skyline from Etsy

Github link: https://github.com/etsy/skyline

Commercial alternative: Anomaly detection system from Nagios

Description: Skyline is a real-time anomaly detection system that helps security teams with scalable, passive monitoring of potentially hundreds of thousands of metrics. It is designed to be used wherever there is a large quantity of high-resolution time series that need constant monitoring. Once Skyline detects an anomalous metric, it surfaces the entire time series to the webapp, where the anomaly can be viewed and acted upon.
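
To give a feel for what flagging an anomalous metric means in practice, here is a deliberately simplified three-sigma check in the spirit of the per-metric tests Skyline runs; it is a sketch, not Skyline’s actual consensus algorithm, and the tolerance value is illustrative.

from statistics import mean, stdev

def three_sigma_anomaly(timeseries, tolerance=3.0):
    """Flag the latest datapoint if it falls more than `tolerance`
    standard deviations from the historical mean -- a simplified
    version of the kind of check Skyline applies per metric."""
    history, latest = timeseries[:-1], timeseries[-1]
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(latest - mu) > tolerance * sigma

# Example: a steady metric with a sudden spike at the end
series = [12, 11, 13, 12, 12, 14, 11, 13, 12, 95]
print(three_sigma_anomaly(series))  # True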

AnomalyDetection from Twitter

Reference link: https://blog.twitter.com/2015/introducing-practical-and-robust-anomaly-detection-in-a-time-series

Github link: https://github.com/twitter/AnomalyDetection

Description: AnomalyDetection is an open-source R package for detecting anomalies that is robust, from a statistical standpoint, in the presence of seasonality and an underlying trend.

RTIR REST API

Reference link: https://isc.sans.edu/diary/Automating+Metrics+using+RTIR+REST+API/20087

Github link: https://github.com/tcw3bb/ISC_Posts/blob/master/RTIR-phish-template.py

Description: RTIR is an open source ticketing system for incident response, based on Request Tracker. The system can be aligned with the Verizon VERIS taxonomy (to compare against the Verizon DBIR reports) by creating custom fields that match its categories, and it supports a REST API to automate ticket creation.
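
For a sense of the automation, here is a minimal sketch that files a new ticket through RT’s classic REST 1.0 interface. The hostname, credentials, queue name, and the VERIS custom field are hypothetical placeholders; adjust them to your RTIR configuration.

import requests

RT_BASE = "https://rt.example.com"  # hypothetical RTIR instance

# RT's REST 1.0 interface accepts a form-encoded "content" blob of
# "Field: value" lines; queue and custom field names are assumptions.
content = "\n".join([
    "id: ticket/new",
    "Queue: Incident Reports",
    "Subject: Phishing report - suspicious email",
    "CF.{VERIS Category}: Social - Phishing",  # hypothetical custom field
])

resp = requests.post(
    f"{RT_BASE}/REST/1.0/ticket/new",
    data={"user": "apiuser", "pass": "apipass", "content": content},
)
print(resp.text)  # RT replies with "# Ticket NNN created." on success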

Securing the Human

Ava

Reference link: http://avasecure.com

Github link: https://github.com/SafeStack/ava

Description: AVA maps the realities of your organisation, its structures, and behaviours. This map of people and interconnected entities can then be tested using a unique suite of customisable on-demand and scheduled information security awareness tests. The results combine into a detailed risk profile of your organisation that no other tool can provide – from the people up.

Speaking at Black Hat USA 2015

Very excited to announce my selection and participation in Black Hat USA 2015, being held in Las Vegas this year. My talk is titled ‘Securing Your Big Data Environment’. Come join me in the South Seas CDF room in Mandalay Bay from 16:20 to 17:10.

Link to Black Hat: https://www.blackhat.com/us-15/briefings.html#securing-your-big-data-environment


Summary of the talk: Hadoop and big data are no longer buzzwords in large enterprises. Whether for the right reasons or not, enterprise data warehouses are moving to Hadoop, and with them come petabytes of data. How do you ensure big data in Hadoop does not become a big problem or a big target? Vendors pitch their technologies as the magical silver bullet. However, did you realize that some controls depend on how many maps are available in the production cluster? What about the structure of the data being loaded? How much overhead does a decryption operation add? If tokenizing data, how do you distinguish between tokenized and original production data? In certain ways, however, Hadoop and big data represent a greenfield opportunity for security practitioners. They provide a chance to get ahead of the curve, and to test and deploy your tools, processes, patterns, and techniques before big data becomes a big problem.

Come join this session, where we walk through the control frameworks we built and what we discovered, reinvented, polished, and developed to support data security, compliance, cryptographic protection, and effective risk management for sensitive data.

Indicators of Compromise List and Recommended Security Measures

Unlike the loss of a physical device, if an attacker breaks into your corporate network, you still have your data after they steal it. That makes it more important than ever to detect whether your company has been broken into. This article identifies a number of indicators of compromise activity on a corporate network. It is not an exhaustive list; I will keep adding to it, along with recommended security measures you can take to detect and prevent activity that could lead to a compromise of your network.

Logging: When you log, you can detect and identify unusual activity on your network and on your endpoints.

  • Look at log file line counts and log line lengths. Establish an average baseline of your log file sizes at a minimum, then trigger alerts when log volume increases or, worse, when the number of events for the day decreases (a minimal baselining sketch follows this list)
  • Look for spikes in specific traffic types (e.g. SSH, FTP, DNS) and baseline the number of events, including bandwidth usage
  • Look at the country of origin of IP connections (or break it down by protocol)
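
As promised above, here is a minimal sketch of the log-volume baselining. The window size and the two-sigma tolerance are illustrative assumptions, not tuned values.

import math

def volume_alert(todays_count, history, tolerance=2.0):
    """Compare today's log line count against a rolling baseline and
    alert on spikes or, worse, drops (which can mean logging was
    tampered with). Thresholds here are illustrative."""
    baseline = sum(history) / len(history)
    # population standard deviation of the history window
    sigma = math.sqrt(sum((x - baseline) ** 2 for x in history) / len(history))
    if sigma == 0:
        return todays_count != baseline
    return abs(todays_count - baseline) > tolerance * sigma

# Example: ~1M lines/day baseline, sudden drop to 200k lines
history = [1000000, 980000, 1020000, 1010000, 990000]
print(volume_alert(200000, history))  # True -> investigate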

Endpoint

  • Scan for the software/tools listed in “List of Publicly Available Tools used for Attacks” below. These include non-malicious network utilities like SysInternals and PsTools that are not rated as malicious by AV, but are good tools for use by an attacker (a minimal filesystem-scanning sketch follows this list)
  • Scan for RDP session artifacts in HKCU\Software\Microsoft\Windows\Shell\BagMRU and related keys
  • Scan for remote access services – VNC, RDP
  • Scan for remote access ports (e.g. TCP 3389 for RDP, TCP 5900 for VNC)
  • Scan for batch files and scripts
  • Scan for multiple archive files – ZIPs and RARs, including encrypted compressed files
  • Scan for RAR/ZIP compression artifacts in page files and unallocated space
  • Scan for programs run in the AppCompatCache
  • Scan for sysadmin tools executed, such as tlist.exe, local.exe, kill.exe
  • Scan for files in the root of C:\RECYCLER
  • Scan for anomalies like an abnormal source location or logon time (for example, after 7pm EST) and other time-of-use rules and baselines
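
Here is a minimal sketch of the filesystem side of that scanning: walking a directory tree for known attack-tool filenames (drawn from the tool list later in this post) and for archive files that may be staging data for exfiltration. The name list and scan root are illustrative, not exhaustive.

import os

# Filenames drawn from the tool list later in this post; extend as needed.
SUSPICIOUS_NAMES = {"gsecdump.exe", "pwdump.exe", "wce.exe", "psexec.exe",
                    "cachedump.exe", "htran.exe"}
ARCHIVE_EXTS = (".zip", ".rar", ".cab")

def scan_tree(root):
    """Walk a filesystem tree and report known attack-tool names and
    archive files (possible staging for exfiltration)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            lower = name.lower()
            if lower in SUSPICIOUS_NAMES or lower.endswith(ARCHIVE_EXTS):
                hits.append(os.path.join(dirpath, name))
    return hits

for path in scan_tree(r"C:\Users"):  # or "/" on Linux endpoints
    print("review:", path)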

Below are items that could indicate compromise or potentially malicious activity on your network.

Network Inbound

Network Lateral

  • Detect fingerprinting of devices: put any authorized crawlers in an exception group, then alert on any other device/asset polling other assets inside the network (a bot, a worm, or someone crawling through your internal network)
  • Check Windows event logs for lateral movement across the network using the native Windows commands net view and net use (a minimal detection sketch follows this list)
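
Here is a minimal sketch of that check, assuming you already have process-creation command lines extracted from the Windows event logs; the log format is a hypothetical simplification.

import re

# Flag native commands often used for lateral movement
LATERAL_CMDS = re.compile(r"\bnet\s+(view|use)\b", re.IGNORECASE)

def flag_lateral_movement(command_lines):
    """Return command lines that invoke `net view` / `net use`."""
    return [c for c in command_lines if LATERAL_CMDS.search(c)]

events = [
    r"net use \\FILESRV01\c$ /user:admin P@ssw0rd",
    r"notepad.exe report.txt",
    r"NET VIEW \\10.0.0.12",
]
for hit in flag_lateral_movement(events):
    print("suspicious:", hit)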

Network Outbound

  • Detect endpoint attempts to access a website by IP address rather than by FQDN. Think about how many users in your network would actually type 173.194.73.106 instead of www.google.com into their web browser (a sketch of this check follows this list)
  • Detect endpoint attempts to access a non-routable IP address
  • Detect endpoint attempts to access the Internet via non-proxied ports in an enterprise
  • Monitor for increases in encrypted data outbound, whether traffic over port 443 or encrypted emails. Also monitor for non-SSL traffic going to port 443
  • Monitor outbound communication via odd ports, protocols, and services (egress filtering)
  • Detect ZIP, RAR, or CAB formatted files outbound; these can be identified via their file headers (also covered in the sketch below)
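
Here is a minimal sketch of two of these checks: spotting IP-literal URLs in a proxy log (assuming a hypothetical whitespace-delimited format), and identifying archive payloads by their magic bytes rather than their filenames.

import re

# Matches an HTTP(S) URL whose host is a raw IPv4 address
IP_URL = re.compile(r"https?://(?:\d{1,3}\.){3}\d{1,3}(?:[:/]|$)")

# File-format magic bytes for common archive containers
MAGIC = {b"PK\x03\x04": "zip", b"Rar!": "rar", b"MSCF": "cab"}

def check_proxy_log_line(line):
    """Return fields of a whitespace-delimited log line that are
    IP-literal URLs (hypothetical log format)."""
    return [f for f in line.split() if IP_URL.match(f)]

def classify_payload(first_bytes):
    """Identify ZIP/RAR/CAB content by header, regardless of filename."""
    for magic, kind in MAGIC.items():
        if first_bytes.startswith(magic):
            return kind
    return None

print(check_proxy_log_line("GET http://173.194.73.106/ HTTP/1.1"))
print(classify_payload(b"PK\x03\x04\x14\x00"))  # -> "zip"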

List of Publicly Available Tools used for Attacks

A number of publicly and freely available tools are used by attackers to target your network and to steal data from your company. Some are custom tools; others are legitimate utilities employed by your system administrators and may not stand out as suspicious. Below is a list of such tools, including some sourced from the Mandiant M-Trends report.

  • ASPXSpy (Remote Access): Can perform remote command execution, upload/download files, interact with SQL databases, query registry keys, and perform port scans
  • Gh0st RAT (Remote Access): Backdoor with a graphical client builder and server
  • Poison Ivy (Remote Access): Backdoor with comprehensive remote access capabilities on a compromised system; has a graphical management interface
  • Radmin (Remote Access): Popular remote administration tool
  • Xdoor (Remote Access): Backdoor with key logging, audio/video capture, file transfers, HTTP proxying, system information retrieval, reverse command shell, DLL injection, and command execution
  • ZXshell (Remote Access): Backdoor that includes key logging, file transfers, and SYN floods; can launch processes, steal credentials, and disable local firewalls
  • Cachedump (Privilege Escalation): Obtains password hashes for domain logins that are cached in the Windows registry
  • GetHashes (Privilege Escalation): Obtains password hashes from the SAM file
  • Gsecdump (Privilege Escalation): Obtains password hashes from the Windows registry, the SAM file, cached domain credentials, and LSA secrets
  • Hookmsgina (Privilege Escalation): Hooks into the Microsoft GINA (msgina.dll) and dumps the username, password, and domain to a file
  • Incognito (Privilege Escalation): Performs Windows access token manipulation
  • Pass-the-Hash toolkit (Privilege Escalation): Accesses hashes of users who have interactively logged into a system and allows an attacker to impersonate those users on other systems by replaying the hashes
  • Pwdump (Privilege Escalation): Obtains password hashes from the SAM file; many password dumping tools are Pwdump variants
  • Windows Credential Editor (WCE) (Privilege Escalation): Can grab current sessions, modify credentials, and perform pass-the-hash
  • Htran (Port Redirection): Takes incoming traffic on one port and sends it to a specified IP and port on another system
  • PsTools (Lateral Movement): Can remotely invoke executables across a network; part of the SysInternals suite (especially PsLoggedOn, PsExec, PsService, PsInfo)

Does Using Google Libraries API CDN give you Performance Benefits?

A CDN – short for Content Distribution Network – helps serve content with high availability and provides performance benefits along with faster page load times. The Google Libraries API is a CDN for serving the most popular open-source JavaScript libraries, and any website can use it for free. Examples include Dojo, jQuery, and Prototype, among others.

Why use a CDN?

The greatest benefit is from caching. The theory is that if a visitor has already been to a site that loads its JavaScript libraries, say jQuery, from the Google CDN, then when they visit your website the library is already in their browser cache and will not have to be downloaded again. This sounds great in theory. I decided to put it to the test for the popular jQuery library: how likely is it that a visitor arrives at your site with that JavaScript file already cached?

Tracking Performance with HTTP Archive

In order to track the speed of the web over time, Google built the HTTP Archive as an open source service, and has since transitioned its ownership and maintenance to the Internet Archive. It is a permanent repository of web performance information such as page size, requests made, and technologies utilized. Its list of URLs is drawn solely from the Alexa Top 1,000,000 sites. As of March 2012, a total of 77082 sites had been analyzed; this count is expected to ramp up to cover the full top 1 million websites on the Internet soon.

Google Libraries API CDN

Running an analysis of the HTTP Archive for usage of the Google Libraries API, we get a count of 14345 sites out of the 77082 sites tested, or 18.61%, which is quite impressive. However, this count does not distinguish between different libraries and their versions. That matters because sites must reference the exact same CDN URL, from Google or Microsoft or any other CDN, to obtain the cross-site caching benefits.

In case you were curious, the Microsoft CDN is used by only 157 of the sites tested: a mere 0.2% of the most popular websites on the Internet leverage it, compared to the 18.61% that leverage the Google Libraries API CDN.

Chart: Sites using the Google Libraries API

Test 1: Validate the Google APIs reference

We want to verify the 18.61% figure. The count of 14345 sites comes from any HTTP request containing 'googleapis.com' anywhere in the URL. The query used to extract this statistic was:

SELECT COUNT(DISTINCT pageid)
FROM requests
WHERE url LIKE '%googleapis.com%';

Google API reference in HTTP Archive

Test 2: Determine the percentage of pages that ran a jQuery version

Next, we want to determine the percentage of pages that were using at least some version of jQuery, from the list of sites in the HTTP Archive. Running the following SQL query gives us 11145 pages, or 14.45% of the total sites analyzed.

SELECT COUNT(DISTINCT pageid) 
FROM requests 
WHERE url LIKE 'http%://ajax.googleapis.com/ajax/libs/jquery/%';

jQuery Count in HTTP Archive

Test 3: Determine jQuery versions

Next, we want to determine the breakdown for each distinct URL that represented jQuery. Again, this is the statistic that determines the cross-site caching benefit. Running the following query gives us the breakdown below:

SELECT url, COUNT(DISTINCT pageid) AS count
FROM requests 
WHERE url LIKE 'http%://ajax.googleapis.com/ajax/libs/jquery/%' 
GROUP BY url 
ORDER BY count DESC;
Version   Protocol   Percentage   Count
1.4.2     http       2.19%        1695
1.3.2     http       1.48%        1142
1.7.1     http       1.34%        1039
1.6.2     http       0.69%        533
1.5.2     http       0.68%        529
1.4.4     http       0.61%        474
1.6.1     http       0.58%        450
1.6.4     http       0.53%        414
1.7.1     https      0.47%        366
1.4       http       0.40%        316

This shows that the fragmentation issue is very real. The most popular URL for loading jQuery was jQuery 1.4.2 via http, which constitutes only 2.19% (1695 out of 77082) of all the websites tested. The next most popular is jQuery 1.3.2 at 1.48% (1142 out of 77082), and so on. It is not just fragmentation across the different libraries and versions, but also across protocols (http vs. https), since those are cached separately.

Another source of fragmentation is the “latest version” reference, which allows you to request version 1 or 1.x and automatically receive the latest 1.x.y release. Because these alias URLs must be re-validated frequently, they are served with a much shorter freshness lifetime than fully pinned versions, which further undermines caching. You can see the full output of the above query posted on GitHub if you are interested.
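
You can check the caching difference yourself. The sketch below issues HEAD requests against a pinned jQuery version and the version-1 alias and prints the caching headers each returns; historically the pinned URLs have been served with a roughly one-year freshness lifetime and the alias URLs with about an hour, but run it rather than taking those values on faith.

import requests

# Compare caching headers on a fully pinned version vs. the "latest
# version" alias. The URLs are the public Google Libraries API paths;
# the header values you get back are the authoritative answer.
for url in (
    "https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js",
    "https://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js",
):
    resp = requests.head(url)
    print(url)
    print("  Cache-Control:", resp.headers.get("Cache-Control"))
    print("  Expires:", resp.headers.get("Expires"))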

Conclusion & Recommendations

So, does using a CDN like the Google Libraries API really provide a performance boost via cross-site caching? The answer is quite conclusively NO: it is not likely to benefit your first-time visitors. The assumption that many of your website’s first-time users will already have the JavaScript cached – because they happened to visit another site that uses the exact same version of the same library – is wrong, as the fragmented numbers above show. It also depends on how closely your target visitor demographic matches that of the sites using the CDN.

  1. You are most likely better off bundling jQuery with the rest of your site’s JavaScript
  2. Make sure to use the Expires header to make HTTP requests cacheable since for your repeat visitors, it doesn’t matter where the file was served from
  3. Browser users leveraging privacy options that clear the cache (e.g. privacy.clearOnShutdown.cache option) between browser sessions cannot leverage the assumed benefits of a CDN distribution
  4. Another point of note, coming from Steve Souders, is that the amount of disk space for caching has not caught up with people’s usage of the web. For example, IE has a default cache size of 8–50 MB, Firefox 50 MB, Opera 20 MB, and Chrome ~80 MB. Mobile phones have an even smaller limit. This cache will eventually fill up, and when it does, the FIFO (first-in-first-out) rule applies: cached resources must make way for new ones.
  5. If websites use an https reference to a JS library, browsers usually default to not caching those files to disk when they are retrieved over SSL.
  6. If you are still considering experimenting using the Google Libraries API or another CDN for libraries, use something like Yahoo’s Boomerang. This piece of JavaScript measures a whole bunch of performance characteristics of your user’s web browsing experience. All you have to do is stick it into your web pages and call the init() method.