General Science, Technical

Privacy and Identity

I have always been a (relatively) cautious man when it comes to providing personal information online. I know these words come from the owner of calebshortt.com, but willfully disclosing information online is different from providing information that is meant to be kept private. Different rules apply. Some information can leak out regardless – your name in news articles or academic papers, crawlers that scrape social media, etc.

In many talks, courses, and my own discussions, I am seeing a trend: the “traditional” sense of privacy, where the idea is to not provide any information unless it is required, is shifting (with the help of social media). This “minimalistic” mentality is great for restricting the dissemination of personal information – especially online. The “new” sense of privacy, however, gravitates towards liberally providing personal information while retaining complete control over how that information is accessed by third parties (or by the original holder of the information).

This is a big change, and it can lead to disastrous outcomes.

Take myself, for example: I limit the information that I add to social media websites (if I use any at all), and I make sure to continually review the privacy measures of each one. I try to apply the best of both the “traditional” and “new” privacy approaches. This can work fantastically if you explicitly trust the website (or group) to secure your private information. I even run queries on my name in various search engines to see what comes up. This gives me a rough measure of how exposed I am to crawlers.

What I did not expect is that, after taking care to secure my own online “identity” and my private information, the weakest link would be my government. I am talking about the current situation with the Canadian Student Loan information breach: a removable hard drive holding the personal information of over half a million current or previous students disappeared. I was shocked. I suppose I shouldn’t have been.

All of my hard work, circumvented by the carelessness of a person I had never met – someone I never knew was even handling my personal information.

Through my frustration I have been reminded that the weakest link in most security, or privacy, chains is the human link: the link that requires a person to have the correct training, common sense, and authorization to access, transport, and dispose of my personal information correctly and securely.

In my case, this is the second time that a major organization has “lost” my personal information on a removable hard drive. It is worth noting that removable hard drives are usually restricted for exactly this reason.

All I can do is take the necessary precautions – now that it’s out there.

General

“Smart”

When I tell someone that I am a Computer Scientist, and that I am working towards finishing my Master’s Degree in the field, many of them remark on how “smart” I must be to achieve such a goal. I am taken aback by this response, as I do not view myself as any more intelligent than they are. What, then, makes Computer Scientists the subject of such an automatic assumption?

The answer may lie not in the intelligence of the individuals, but in the way that they interact with their surroundings. Their world.

I am a Computer Scientist, but my skills do not fall solely within that realm. I am an avid baker. I surf and skateboard. I am mechanically inclined and can fix my own vehicles. I can play multiple instruments. I am known to write occasional prose and poetry. I read frequently – and on various topics. I keep up with current events. I have an extensive knowledge of movies and music. I play billiards at a competitive level. I am an amateur scotch taster.

The question is: why did I decide to develop these hobbies and skills? The answer, for me at least, is that I was curious. I started baking bread because I was curious how it would turn out. I got quite good at it through trial and error. Now I can bake a decent loaf or two with no trouble at all. I have even made artisan loaves at the request of friends. When I saw a YouTube video of someone playing the ukulele, I thought that it would be fun to play. I went to the music store, bought a cheap ukulele, and started to play some basic tunes from online tutorials. Now I can play a variety of songs – which comes in handy when I’m surfing.

Many Computer Scientists are just like me. It is unacceptable to them to “not know” what to do if they need to, say, sharpen a knife. They will go out and learn how to sharpen their own knives. If there is a problem, they try to fix it. If there is something they do not know, they try to learn about it so that, next time, they will know. We are constantly learning. This might be brought on by working in such a fast-paced field – one where first-year textbooks can be outdated before the students graduate.

This trait is not limited to Computer Scientists; there are many who are driven to better themselves. Sure, it takes good grades to get into Computer Science, but it takes good grades to get into many fields of study. The “smart” that seems to be automatically associated with Computer Science may derive from this need to better ourselves – and to solve problems. It builds a large skill-set that helps us solve even more problems.

And solving problems is something that we are very good at doing. Maybe that is what “smart” is after all.

Technical

Safety Assurance Cases: Making Compelling Arguments for Software Safety

It happened! Your company just pushed your new software product out. Months (or years?) of hard work culminate in this single moment. Your “baby” is now out in the wild – there to fend for itself.

It is perfect. You know it is. You and your team are using the latest in safety-rigorous development life-cycles. You even pushed back your release date to finish your extensive test suite. There can’t possibly be a safety issue with your software.

The phone rings. There’s been an “incident”.

Safety in software is a troublesome topic. Much of safety testing and “assurance” relies on subjective or incomplete data and processes, which can leave significant voids in the identification, analysis, and mitigation of safety “threats”. Test cases rely on the developer thinking of the possible ways that the test subject could be used – but what if the safety threat came from “legitimate” user input that passed validation?
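
To make that last point concrete, here is a minimal sketch in Python. The function names and limits are invented for illustration: every individual input passes validation, yet a combination of valid inputs produces an unsafe state that input-validation tests alone would never flag.

MAX_SINGLE_DOSE_MG = 100  # the only limit the validator knows about

def validate_dose(dose_mg: float) -> bool:
    """Field-level validation: is a single dose within its allowed range?"""
    return 0 < dose_mg <= MAX_SINGLE_DOSE_MG

def schedule_doses(doses_mg: list[float]) -> list[float]:
    """Accept a day's schedule of doses, each of which passes validation."""
    if not all(validate_dose(d) for d in doses_mg):
        raise ValueError("invalid dose in schedule")
    return doses_mg

# Every dose is "legitimate" (it passes validation), so the validation
# tests all pass. Yet the cumulative total of 600 mg in one day may far
# exceed a safe daily limit that the validator never considers.
print(schedule_doses([100, 100, 100, 100, 100, 100]))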

Tim Kelly argues that Safety Assurance Cases may be able to fill in the gap – and more.

Safety Assurance Cases are evidence-based arguments that support (or refute) the safety of a software system in a given context. They cover technical and non-technical aspects of the system, which allows them to identify more than just the safety threats associated with validation and testing.

Kelly et al., in the paper “Introducing Safety Cases for Health IT”, describe the Safety Assurance Case with respect to the healthcare field. They discuss how the Safety Assurance Case evolves along with the software (from requirements through deployment and maintenance). The structure is one of claims supported by arguments, which are in turn supported by evidence. At the highest level sits a claim about the overall safety objectives. This claim is supported by a series of arguments, each claiming that some sub-aspect of safety has been addressed, and each of those arguments is backed by evidence that the sub-aspect really has been addressed.
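
As a rough sketch of that structure, a safety case can be modelled as a tree of claims, where each claim is backed by evidence, by sub-claims, or by nothing at all; claims backed by nothing are precisely the gaps in the argument. The class and field names below are my own invention, and Kelly et al. present safety cases as a structured argument notation, not as code.

from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def unsupported(self) -> list[str]:
        """Collect statements of claims that have neither evidence nor sub-claims."""
        gaps = [] if (self.evidence or self.subclaims) else [self.statement]
        for sub in self.subclaims:
            gaps.extend(sub.unsupported())
        return gaps

# Top-level claim about the overall safety objectives, supported by
# sub-claims (the arguments), each meant to be backed by evidence.
# The hazard-log entry is a hypothetical example.
case = Claim(
    "The system is acceptably safe to operate in its clinical context",
    subclaims=[
        Claim(
            "All identified hazards have been mitigated",
            evidence=["Hazard log entry H-12 closed with test results"],
        ),
        Claim("Validated user input cannot drive the system into an unsafe state"),
    ],
)

print(case.unsupported())
# ['Validated user input cannot drive the system into an unsafe state']

Here the second sub-claim has no supporting evidence yet, and the traversal surfaces it immediately. This is exactly the kind of weakness described next.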

What Safety Assurance Cases provide is a solid, inspectable argument for the safety of a system. They also identify weaknesses in that argument: claims that still lack supporting evidence. Thus a Safety Assurance Case contributes both externally (demonstrating the system’s safety to others) and internally (directing effort towards the weak points of the safety argument).
