After blindly attempting to train the classifier (as seen in Part 2), I had a good long hard think about all the variables and all the possible combinations of things I could try and tweak to get results out of my naive Bayes classifier. I also had a good long hard think about how I would measure those results. What are good results? This is an important thing to consider in machine learning. You may think that accuracy is all that matters — but that’s not the case.

In this experiment, we’re attempting to classify observations into one of 34 different bins. If we look only at accuracy, we see only part of the picture. We can’t answer questions such as “is one category often misclassified as another category?” or “how many different categories does the classifier bin things into?” These questions have answers that are helpful in “debugging” our classifier, i.e. determining why it’s classifying things the way it is, and how we can improve our data and strategy in order to improve its accuracy. This is why confusion matrices are useful tools for visualizing the result of classification. Confusion matrices show how the errors in classification are distributed among the possible classes, shedding light on the overall behavior of the classifier.

If you’ll recall, our dataset class distribution looks like this:

CWE label distribution in the CGC dataset

The labels are heavily skewed toward CWE-121. A classifier could achieve high accuracy in this dataset by simply guessing CWE-121 for every observation. This is a type of overfitting. If the data does not have a lot of descriptive power, i.e. the features I’ve chosen to extract from the binaries are not related to the CWE classes, I would expect the classifier to have this behavior.

To validate this, I decided to perform a trial where I trained the classifier on the dataset, but with the labels randomized. This trial provides a baseline against which subsequent, properly trained trials can be compared, in order to differentiate their results from random noise.

With this in mind, I also identified a few different parameters to tweak in order to potentially improve classifier performance. We can perform classification at the basic block level or at the binary level. We can also choose to use a threshold to discard “uninformative” basic blocks, as described in the previous post. In addition, there is more than one naive Bayes classifier variant. Both Bernoulli naive Bayes and multinomial naive Bayes are applicable classifiers for our data. Finally, we can represent our features in two different ways: we can use a boolean representation for each opcode in each basic block, i.e. whether or not that opcode was present, or we can use an integer, i.e. how many times that opcode was present in the block. I attempted to exercise a reasonable combination of these parameters when training and evaluating the classifier.

Throughout the rest of this post, you will see results for three separate classifiers: a Bernoulli naive Bayes classifier trained on the binary-featured dataset, a multinomial naive Bayes classifier trained on the binary-featured dataset, and a multinomial naive Bayes classifier trained on the integer-featured (also referred to as “event-featured”) dataset.
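
For reference, here’s a minimal sketch of how those three setups might look in scikit-learn (which is the library I’m using); the toy data and variable names are mine, not the project’s actual code:

import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

# Toy stand-in data: six basic blocks, four opcode features, three CWE labels.
# X_counts holds per-block opcode counts; X_binary is its 0/1 presence version.
X_counts = np.array([[3, 0, 1, 0],
                     [2, 1, 0, 0],
                     [0, 0, 4, 2],
                     [0, 1, 3, 1],
                     [1, 0, 0, 5],
                     [0, 2, 0, 4]])
X_binary = (X_counts > 0).astype(int)
y = ["CWE-121", "CWE-121", "CWE-122", "CWE-122", "CWE-416", "CWE-416"]

classifiers = {
    "Bernoulli NB, binary features": BernoulliNB().fit(X_binary, y),
    "multinomial NB, binary features": MultinomialNB().fit(X_binary, y),
    "multinomial NB, count features": MultinomialNB().fit(X_counts, y),
}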

Tuning The Threshold

As a refresher, in the previous post I talked about a clever way we could use the probability estimates returned by naive Bayes to identify basic blocks that are common across many CWE classes vs. those that are unique to a CWE class. The idea is that if the classifier thinks each of the classes is equally likely for a block, then that block is not informative for classification and can be discarded. Conversely, if the classifier is very confident that a block belongs to a particular class, then it is informative and should be kept.

If we’re going to use this idea to discard basic blocks, we first need a way to measure this difference in probabilities between the classes. I chose to use the difference between the highest probability class and lowest probability class returned by the classifier. If the difference is small, the distribution is relatively even across the classes — if the difference is large, it is likely to be uneven. Semi-formally:

m = x_highest - x_lowest
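
In code, this metric falls straight out of predict_proba; a minimal sketch (the toy data and names are mine):

import numpy as np
from sklearn.naive_bayes import MultinomialNB

def threshold_metric(clf, X):
    """Per-block difference between the highest and lowest class probability."""
    probs = clf.predict_proba(X)      # shape: (n_blocks, n_classes)
    return probs.max(axis=1) - probs.min(axis=1)

# Toy data: opcode counts for a few blocks, labeled with CWE classes.
X = np.array([[3, 0, 1], [0, 2, 1], [1, 1, 4], [2, 0, 0]])
y = ["CWE-121", "CWE-122", "CWE-416", "CWE-121"]
clf = MultinomialNB().fit(X, y)

m = threshold_metric(clf, X)
informative = m >= 0.5                # mask of blocks considered "informative"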

Now that we have a metric, we need to know at what value blocks become “informative” vs. “uninformative.” In other words, we need to know what our threshold is. A good way to determine this is by training the classifier and seeing how classification performs with respect to different threshold values. This is a type of classifier tuning similar to that done by K-nearest Neighbors when choosing a value for K. This type of tuning requires a validation data set.

Typically, when training a classifier, you need at least two datasets: a training dataset, with which to train your classifier, and a testing dataset, with which to evaluate its accuracy. But when you want to test your trained classifier under different conditions and pick the best condition as a parameter of your classifier, you need an additional dataset called a validation set in order to keep the testing data independent of the training process. Otherwise, you risk overfitting your classifier to your data, and will likely produce results in your experiment that translate poorly to unseen data.
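
A three-way split like the one used in these trials might look like this with scikit-learn’s train_test_split (a sketch; the placeholder data is mine):

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(60).reshape(30, 2)                     # placeholder features
y = np.tile(["CWE-121", "CWE-122", "CWE-416"], 10)   # placeholder labels

# Peel off 1/3 for testing, then split the remainder 50/50 into
# training and validation sets, giving three sets of 1/3 each.
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=1/3, shuffle=True)
X_train, X_val, y_train, y_val = train_test_split(X_rest, y_rest, test_size=0.5, shuffle=True)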

To tune the threshold, I trained each of the three classifiers on their respective data and plotted their accuracy with respect to different threshold values. I created two plots — one where the accuracy was calculated with respect to basic block level classification, and one where the accuracy was calculated by having the “informative” basic blocks “vote” on what they believed the binary they came from should be labeled as.

Accuracy curve with respect to threshold for each naive Bayes classifier applied to basic blocks
Accuracy curve with respect to threshold for each naive Bayes classifier applied to binaries

The accuracy curve per basic block looks kind of like garbage — each of the classifiers performs fairly differently. However, when voting is applied to classify whole binaries, we see that each classifier’s accuracy peaks at a threshold value of about 0.5. Therefore, we choose this as our threshold value for each classifier.

Also, for fun, I plotted the distribution of the threshold metric. If I’m right about some basic blocks being common to all the CWE classes and others being unique, we should see a bimodal distribution of this metric.

Threshold metric distribution for Bernoulli naive Bayes on binary-featured dataset
Threshold metric distribution for multinomial naive Bayes on binary-featured dataset
Threshold metric distribution for multinomial naive Bayes on integer-featured dataset

The bimodal distribution hypothesis holds, if only barely. It’s not very pronounced, but it can be seen in each of the three distributions. This is useful information to have, because we could have chosen other metrics by which to create a threshold. It’s possible that another metric would produce a more emphatic bimodal distribution. Such a metric would likely perform better as a threshold.

Calculating The Baseline

The next thing to do is calculate the accuracy and confusion matrices for the baseline trial with the labels in the dataset randomized. I calculated this for each of the three classifiers, and each returned a very similar confusion matrix. One is shown below.

An example confusion matrix produced from random noise baseline testing

You’ll notice that the classifier is classifying nearly everything as CWE-121, which is exactly as expected. We’ll try to improve on this overfitting with our different training strategies.

Accuracy under the different conditions was also pretty similar between classifiers in the random trials. To calculate these accuracies, twenty trials were run. The data was randomized differently for each trial and split into training, validation, and testing datasets containing 1/3 of the data each.

Baseline accuracies for the different classifiers, trained on randomly labeled data

Unsurprisingly, using a threshold on the randomly labeled data reduces the accuracy by an enormous amount. And while it’s not shown here, it’s also worth noting that using a threshold on this data reduces the dataset by a significant amount. We expect more of the data to be preserved during normal trials.

Training The Classifiers

Finally we’re ready to evaluate our classifiers on real data with some different parameters. I evaluated each classifier under three different conditions — first, training and testing for basic block classification without any threshold value, second, basic block classification with a threshold value of 0.5, and third, whole binary classification with a threshold value of 0.5 using the basic blocks to vote on a label for their binary.
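
The voting step itself is just a majority vote per binary; a sketch of what it might look like (function and variable names are mine):

from collections import Counter

def vote_on_binaries(block_preds, block_binaries):
    """Label each binary with the most common prediction among its blocks."""
    votes = {}
    for pred, binary in zip(block_preds, block_binaries):
        votes.setdefault(binary, Counter())[pred] += 1
    return {b: counts.most_common(1)[0][0] for b, counts in votes.items()}

# Example: three blocks from "bin_a", two from "bin_b".
print(vote_on_binaries(
    ["CWE-121", "CWE-121", "CWE-416", "CWE-122", "CWE-122"],
    ["bin_a", "bin_a", "bin_a", "bin_b", "bin_b"]))
# {'bin_a': 'CWE-121', 'bin_b': 'CWE-122'}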

Each of these conditions was tested with twenty trials of randomized data, with training/validation/testing splits of 1/3 of the dataset each. The results for each of the three classifiers are shown below.

Bernoulli naive Bayes accuracy on binary-featured dataset
Multinomial naive Bayes accuracy on binary-featured dataset
Multinomial naive Bayes accuracy on integer-featured dataset

The first thing you’ll notice is that the accuracy of each of the classifiers at basic block classification is lower than the random baseline. This seems like a bad sign, but it is also odd. Any sort of significant deviation from the random baseline implies that the classifier is picking some kind of pattern out of the data. We turn to the confusion matrices to try to diagnose the difference between the random baseline and the actual run.

Confusion matrix for Bernoulli naive Bayes applied to binary-featured dataset for basic block classification

While the random baseline classifier overfit to CWE-121, there is some evidence here that our properly-trained naive Bayes classifier does not overfit as strongly. In particular, CWE-119 and CWE-416 are guessed quite often as well. In addition, we are able to correctly classify a significant number of blocks from CWE-416 and CWE-122, in addition to CWE-121. Unfortunately, this also causes many CWE-121 basic blocks to be incorrectly guessed as other classes. Since this is likely due to the poor labeling of the dataset, it seems we can say that there is some predictive value in the opcode features extracted from the basic blocks, though there’s too much noise for the classifier to produce an acceptable accuracy.

The other unfortunate observation about the classifier accuracies is that applying a threshold does not increase the accuracy of the classifier. In the best case, it only reduces accuracy by about 0.01, which is a marked improvement over the baseline but not terribly helpful. Voting decreases the accuracy even further, debunking the theory that we can use the probability outputs from naive Bayes to whittle the set of basic blocks down to the informative ones.

The Takeaways

As with all good science, just because something doesn’t work out the way you want or expect it to, doesn’t mean there isn’t a reason. I looked through the documentation for the naive Bayes classifier implementation I used from scikit-learn, attempting to find something to help me gain some insight into the probability outputs, and ran across this lovely gem:

Note from scikit-learn.org about naive Bayes classifiers

…so the idea of using the probabilities is fundamentally flawed, and in order to implement this, I need to use another classifier. Back to the drawing board. However, this experiment hasn’t been a waste. I’ve learned valuable information about my data composition, and eliminated a possible classifier from the list of candidates. Better, I learned that the data does have some informative value! Next up is identifying an appropriate replacement classifier and continuing the research.

In my previous post, I talked about the data processing needed to turn a bunch of binaries into a dataset for use with a machine learning classifier. While I said that talking through machine learning basics is out of scope for this series, I do want to talk through a bit about how the Naive Bayes classifier works, why I’ve chosen it, and how I plan to exploit the particular way in which it handles my data to do some cool things.

If you recall, we have a dataset whose rows looks something like this:

Depiction of a single row of the dataset

Naive Bayes is a probabilistic inference classifier: it uses Bayes’ theorem to estimate how likely it is that a given observation belongs to a particular class. In our case, Naive Bayes will be telling us how likely it is that a particular basic block belongs to one of our CWE classes. It will do this by looking at the opcodes present in that basic block, comparing them to the basic blocks it saw when it was trained, and returning a probability for each possible CWE class indicating how likely it is to be that CWE. We can use this to do something clever.

There is something fundamentally wrong with the data. If you haven’t spotted it already, don’t worry, it’s subtle. When I created the dataset, I chose to create many observations from a single binary. Namely, I created one observation for each basic block in that binary. Vulnerabilities are generally localized to a small set of the basic blocks within a binary — however, I have labeled entire binaries with a CWE class. This means that there are many basic blocks, i.e. observations, which are incorrectly labeled, as they do not actually contain a vulnerability. That poor labeling is going to affect the training of the classifier, and in turn, adversely affect accuracy.

How the data labeling is simplified from the “true” labeling (left) to the dataset labeling (right)

Knowing what I do about binaries, though, I can use our classifier to cheat a little. What I know is that binaries often have some similar code, e.g. setup and teardown code, string manipulation code, console or network code, etc. Binaries which serve similar purposes often use similar code and therefore share more similar blocks; conversely, binaries which do drastically different things have fewer blocks in common. My hypothesis is that blocks that are common amongst binaries from many different CWE classes are not indicative of a vulnerability, whereas blocks which are more unique to a CWE class are good indicators of that vulnerability. Because naive Bayes is an inference algorithm that returns a probability for each CWE class when classifying a basic block, we can inspect this probability distribution at classification time and determine whether that block skews heavily toward a single class or the classifier thinks it’s seen it before amongst many different classes. For example…

If we have five different CWE classes we are classifying across, a block which has been seen before in all five CWE classes might return a probability from naive Bayes something like this:

Class 1: 20%
Class 2: 23%
Class 3: 21%
Class 4: 19%
Class 5: 17%

On the other hand, a basic block that has only been seen before in one category might return a distribution like this:

Class 1: 2%
Class 2: 3%
Class 3: 93%
Class 4: 1%
Class 5: 1%

It’s worth noting that while I say “a basic block that has been seen…”, in reality any basic block that is similar-ish to a basic block that’s been seen before, i.e. differs in a handful of features, will be classified in the same way. That’s the power of machine learning — we train our classifier to recognize things it’s never seen before, using data that is hopefully representative of the problem space.

So now that we understand how the classifier is being used, let’s give it a go!

Trial 1: Start Simple

For the first attempt, I kept everything as simple as possible. I ignored all of the clever stuff I mentioned about using probabilities to classify blocks as significant or not. While I had intuition that those things would be important, it’s important to challenge and validate your assumptions before acting on them. So for this trial, I randomized over the entire dataset, not bothering to keep samples from the same binary together, and I simply had my classifier predict the CWEs it thought each block had.

This did not go well.

I mean it could have gone worse. My classifier achieved a ~25% accuracy, predicting across 27 CWE classes. I plotted a confusion matrix, so I could see if it was doing anything shady, like only classifying samples as class 121 or 122 (because if you recall, those are the classes that the data was skewed toward).

Confusion matrix of CWE classification, Trial 1

The errors are all over the place. Most classes do get misclassified as 121 more often than others, but not overly so. The fact that the errors are so spread out across the different classes is encouraging, because it may indicate that my approach has credence. It’s possible that the basic blocks that are getting misclassified are pretty common across most classes, and therefore get misclassified as any of them.

To figure out if this was true, I took a look at the probabilities that were generated during classification. I wanted to get a general sense of what the probability spread looked like, so I calculated some basic stats.

Highest probability ever seen: 0.9560759838758044
Lowest probabilities ever seen: 2.4691830828330134e-09
Average difference between in-sample highest and lowest probabilities: 0.32504693947187246
Standard deviation of in-sample probability differences: 0.06998244753425269
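
Those numbers fall straight out of the predict_proba matrix; a sketch of how they can be computed (the toy setup below stands in for my real classifier and data):

import numpy as np
from sklearn.naive_bayes import MultinomialNB

# Toy stand-in for the trained classifier and the classified blocks.
X = np.array([[3, 0, 1], [0, 2, 1], [1, 1, 4], [2, 0, 0]])
y = ["CWE-121", "CWE-122", "CWE-416", "CWE-121"]
clf = MultinomialNB().fit(X, y)

probs = clf.predict_proba(X)                  # (n_blocks, n_classes)
diffs = probs.max(axis=1) - probs.min(axis=1)

print("Highest probability ever seen:", probs.max())
print("Lowest probability ever seen:", probs.min())
print("Average in-sample probability difference:", diffs.mean())
print("Std dev of in-sample probability differences:", diffs.std())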

Hm. It seems like on average, the classifier is pretty confident about what it’s classifying. But the average could be skewed by it being confident classifying the blocks with vulnerabilities, and far less confident in the others. If that were the case, I’d expect to see a bi-modal distribution of the in-sample probability differences. So, let’s plot it.

Distribution of in-sample highest and lowest probability differences

That could somewhat be considered a bi-modal distribution. Not quite the separation that I would have hoped between the two modes, but there is certainly a second spike in the graph. You’ll notice that our average sits right in between the two spikes, at around 0.3.

At this point, I still have no idea if the data points that have a higher probability difference actually classify better than the ones that do not. There’s one easy way to find out, though — try it. I wrote some code to drop classified blocks that had an in-sample probability difference of less than 0.4, and then looked at the results.
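
That dropping logic amounts to masking on the probability spread; a sketch of the idea (not my exact code):

import numpy as np

def filter_and_score(clf, X, y_true, cutoff=0.4):
    """Score accuracy using only blocks whose probability spread clears the cutoff."""
    probs = clf.predict_proba(X)
    spread = probs.max(axis=1) - probs.min(axis=1)
    mask = spread >= cutoff
    preds = clf.predict(X[mask])
    correct = int((preds == np.asarray(y_true)[mask]).sum())
    print(f"Correctly identified {correct}/{int(mask.sum())}")
    print("Accuracy:", correct / mask.sum())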

Trial 2: Probability Inspection

Correctly identified 90/347
Accuracy: 0.259365994236

Not terribly convincing. Let’s try upping the threshold to 0.45.

Correctly identified 42/137
Accuracy: 0.306569343066

Accuracy is increasing, but not by as much as I’d like. And the confusion matrices still have the same general spread. It honestly doesn’t look like the difference between the highest and lowest probability in a prediction is correlated at all with the accuracy of the classifier. I gave it another run at 0.5.

Correctly identified 17/67
Accuracy: 0.253731343284

…and the accuracy went down. I’m willing to believe that it’s just not going to have an effect.

It’s time to take stock of where we are and rework the approach. It appears that my approach of utilizing the predicted probabilities to weed out uninformative basic blocks may not work; however, I would like to try not splitting binaries across training and testing data, to eliminate the possibility that this is preventing the classifier from learning what common basic blocks are. It seems unlikely, but it’s possible. It’s also possible that our data needs better labeling to be useful — this will take time, time that I don’t believe I have for this project. We can also re-featurize the data to include counts of the number of times an instruction occurred in the basic block, rather than just whether or not it occurred. This will give the classifier more information to work with, which may improve its accuracy.


For the past couple of months, I’ve been heavily immersed in a graduate level machine learning course as part of my degree program. I haven’t been able to post about the cool work I’ve been doing because that would enable others to cheat (tsk tsk), but now I’m working on my final project and I would like to informally share my experience training a machine learning classifier to identify vulnerabilities in binaries.

Before we get too deep into this, please be aware that this is an ongoing project and I do not currently know if this technique will work. This series is meant to provide some insight into the practical application of machine learning, regardless of whether or not the results are positive (that’s science!).

The machine learning techniques we covered in the course are considered classical machine learning. We covered supervised and unsupervised learning and implemented several well-known algorithms from scratch. I’m not going to teach you the basics of machine learning in this post — if you’d like to learn how to choose a classifier or how machine learning algorithms work, I recommend perusing this curated list of tutorials. Because the course focused on classical techniques, I chose to also focus my attention there and ignore the more shiny neural networks and deep learning options. My project sets out to show that a straightforward inference technique, Naive Bayes, can be used to provide valuable information for a real problem. That problem is identifying vulnerabilities in binary code.

Binary code is the 0’s and 1’s that a program is made up of. If you’re a software developer, it’s the thing that your source code gets compiled into. If you’re a regular user, it’s the thing that you double click to launch an application. Binary code is made up of machine instructions that your CPU executes. These instructions essentially have two parts: the opcode, and any arguments.

A simplified view of CPU instruction

All of these instructions and opcodes are packed into your binary file one right after the next. Different instructions take up more space than others, and their arguments can also be of varying length. If you open up a binary in a text editor, it looks something like this:

Part of a binary file, as displayed by a text editor

Because the size of instructions and their arguments can vary, we need a special tool called a disassembler to decode that binary file for us so that we can view the opcodes. Disassemblers do something else that’s cool, too, which is show you how logic flows through your program by decomposing it into blocks called “basic blocks” and drawing arrows between those blocks where there are jumps or conditionals that link them.

Some basic blocks from the “yolodex” CGC binary, as disassembled by BinaryNinja

My approach to discovering vulnerabilities relies on this block-level decomposition. I featurize binaries into basic blocks, enumerate the opcodes present in those blocks, and use that block-level data as one observation. I believe this may work because it is the series of instructions that a program executes that makes it vulnerable. Specific sequences, and their associated arguments, can leave a program vulnerable to buffer overflows or use-after-free or many other vulnerabilities. At the block-level, we have a fairly decent picture of which instructions are associated with one another in a sequence, and thus may be able to draw some conclusions about whether the program is vulnerable there.

The different kinds of vulnerabilities that can crop up in programs are well enumerated by the Common Weakness Enumeration (CWE). My approach attempts to classify which of these CWEs a given binary may manifest. Of course, to train a machine learning classifier to recognize vulnerabilities, I have to have some kind of training data set. I am using DARPA’s Cyber Grand Challenge set of Challenge Binaries as the basis for my training data. These binaries were written to contain vulnerabilities, and have readme files that describe the vulnerabilities in detail, including which CWEs they fall under. The first step on the road to creating my classifier is featurizing the binaries as I described above.

As I said, a regular text editor won’t do for reading a binary file, so I needed to choose a disassembler to break the challenge binaries out into their basic blocks. I chose to use Binary Ninja because it has a very easy-to-use Python API, and it’s hobbyist-level cheap (for comparison, the industry-standard disassembler is IDA Pro, which they will sell to you for roughly an arm, and continue to pick off your fingers and toes with renewal fees). I began by writing a quick script to go through a single binary and print out the opcodes it encountered in each block, just to validate that I was able to acquire the data I wanted.
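
The script was roughly along these lines (a sketch using Binary Ninja’s Python API from memory; the file path is a placeholder):

import binaryninja

# Load and analyze a challenge binary (the path is a placeholder).
bv = binaryninja.BinaryViewType.get_view_of_file("challenge_binary")

for func in bv.functions:
    for block in func.basic_blocks:
        # Iterating a basic block yields (tokens, length) pairs, one per
        # instruction; the first token is the instruction mnemonic (opcode).
        opcodes = [tokens[0].text for tokens, _ in block]
        print(func.name, hex(block.start), opcodes)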

The output of a Python script using Binary Ninja’s API to print basic blocks

Awesome. Step 1 complete. But none of this data is labeled. I want to label my basic blocks with the CWE that the binary contains. The CWEs are all described in a bunch of README.md files with no standard format. I briefly considered writing a script to pull the CWE labels out of the READMEs, but decided that the amount of time I would spend debugging edge cases and validating that the script actually grabbed the correct CWE numbers would be at least as much as just doing it by hand. So away I went, plugging CWE numbers into a spreadsheet to create a CSV mapping binaries to their CWEs.

Spreadsheet of CGC binaries labeled with their CWEs

This took me some hours, most of which I spent watching YouTube (bigclive’s teardowns are always a good way to spend some time). Rote tasks are good opportunities to learn a thing or two. Or to watch a show; pick your poison.

The first thing I noticed going through my labels is that some binaries are labeled with more than one CWE. This makes sense, but wasn’t something I had considered in my original approach. To simplify the problem, I made the (somewhat bad) decision to discard samples that are labeled with more than one CWE. There are better ways to handle this problem, but I only have three weekends to train, evaluate, and present my classifier, so I unfortunately do need to cut some corners. I calculated some basic statistics on my dataset to gain some insight into its composition.

Number of binaries: 108
Number of unique CWEs: 27

Distribution of binaries with respect to CWEs

Most of my samples are for CWEs 121 and 122. It’s important to note that this does not necessarily mean that my dataset is unrealistically biased. In fact, CWEs 121 and 122 are “Stack-based Buffer Overflow” and “Heap-based Buffer Overflow,” which are two very common vulnerabilities. That said, it’s worth being mindful of this skewing, as it will impact how our classifier trains.

Putting it all together, we now have a dataset where each row describes the instructions in one basic block, labeled with the CWE represented by the binary that the basic block was taken from. We’re ready to try to train a classifier.
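
Stitching the features and labels together amounts to something like the following sketch (the stand-in data and file names here are hypothetical):

import csv

# Hypothetical stand-ins: cwe_labels comes from the hand-made spreadsheet,
# block_features from the Binary Ninja featurization step.
cwe_labels = {"yolodex": "CWE-121"}
block_features = [("yolodex", {"mov": 3, "cmp": 1}),
                  ("yolodex", {"push": 2, "call": 1})]

opcodes = sorted({op for _, feats in block_features for op in feats})
with open("dataset.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(opcodes + ["cwe"])
    for binary, feats in block_features:
        writer.writerow([feats.get(op, 0) for op in opcodes] + [cwe_labels[binary]])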

Sample rows from the featurized data CSV


So remember how I said you could probably just hook your controller up to a bench power supply? Yeah, don’t do that. It’s a no-go. Apparently the Sega controller receives a clock signal from the Sega that drives some of the pin output. The controllers for the Sega Genesis don’t behave at all like the pinout I showed you yesterday. This is what a good trace of a stock Sega Genesis controller looks like:

Stock Sega Genesis controller logic trace, button order: up, down, left, right, a, b, c, start

Let me explain how I learned this.

This morning I woke up energized and ready to go (masala chai tea for the win!), so I tackled the issue of getting the Sega to actually output video to a TV. While I was twiddling the video cable I learned something exciting.

Connection flaw in the Sega’s video output

My Sega’s video output now comes in two pieces! Lovely. I futzed with this a bit until it fit snugly back into its housing. This resulted in a sort of okay video output to the TV. I could at least see that the game was booting up and enjoy visual feedback when I pressed controller buttons. I decided it was time to eliminate some variables and go back to the baseline — I hooked up the stock controller.

Stock Sega controller without pressing any buttons
Close-up of the Sega controller’s logic levels, showing an evenly clocked signal on 3 lines

Without pressing any buttons, we get three lines with an even clock signal. This was way different from what I saw yesterday messing with the asciiPad. It could be a difference in the controller, but it could also be a result of having a game loaded. I rebooted the Sega and took a trace of it at startup to see when the clocked signals appeared. If they appeared as soon as the controller got power, it would be likely that the controller was generating the clock and sending it to the Sega. But if there was a delay between when the lines went high and when they gained a clock, it was more likely that the clock was a result of the game being loaded, and therefore generated by the Sega and sent to the controller.

Sega controller pins during bootup of the Columns game

Sure enough, we see the latter case. For further sanity checking and information gathering, I replaced the OEM Sega controller with the asciiPad after bootup and took a trace while pressing buttons.

asciiPad logic trace with clock, order of button presses: up, down, left, right, z, c, x, y, a, b, start, mode

We still see the clocked signals, which continues to support the idea that they’re coming from the Sega. But more than that, check out what happens when you press buttons! With the clock, we can actually see many of the buttons that were missing yesterday. You’ll note that the standard buttons exactly match the trace with the stock controller, but we get one additional button here that we didn’t have there — we have Z, which apparently drives all of the clock lines (and possibly just all of the lines in general) high. This is interesting because without a clock signal, other button presses like Start should not be possible.

X and Y are still missing from the trace. It doesn’t make sense to me that the AsciiWare folks would put non-functional buttons on the controller unless they used this controller interface for other systems too. In that case, it’d be a cost savings for them to only manufacture one controller and program the chip inside with whatever logic it needed to interface with the game system it’s marketed for. I could do some research to try and figure this out, but I had another theory I wanted to poke at first.

Recall the trace of the lines at bootup. There’s a little blip at the beginning that’s separated from the rest of the uniform clock signal. This could be the Sega telling the controller “hi, I’m here.” It could just be noise. But it’s definitely different from the rest of the trace. I zoomed in on it, and saw something weird.

Sega bootup trace showing a clock anomaly at startup

The clock signal after this “blip” has pairs of short pulses instead of the usual single pulse. I thought maybe it was an anomaly in the reading, but when I took another, I saw the same thing. The double pulses continue for quite a while, about 8 seconds, before they switch to the single pulse.

This got me thinking, do different games have different bootup behavior with respect to the clock? I’ve got a plethora of old Sega games, so I went to town plugging and unplugging them, taking traces of their behavior at bootup (only cursing occasionally as I blew into cartridges and enjoyed all of that old-timey dust-and-electronics smell). Here are a few traces just for your viewing pleasure.

Sonic 2 bootup trace
NFL 95 bootup trace
Lion King bootup trace
NHL Hockey bootup trace

Beyond each game having its own unique clock bootup shenanigans, they also clock different lines, and clock them inconsistently. Check out this zoomed-in view of the NFL 95 game. This is one clock “pulse” as it appears on several different lines.

Since NFL 95 drove clock signals over more lines, I decided to see if I could find my missing buttons in them. I didn’t, but I did find a few other interesting things. First, when running NFL 95 it seems (under most circumstances) that pressing a button related to a clocked line doesn’t remove the clock signal from the line. This is different from other games, where pressing a button makes a clocked line steady at 5v or ground. The exception to this is when you press Start. In this case, we see a single pulse on pin 9, and then all of the lines are driven high for about a second, including pin 9. The duration of the silence on the lines is not dependent on how long you hold the Start button.

Trace of all asciiPad button presses while running NFL 95
Pressing Start in NFL 95

It’s hard to tell whether the game is intentionally removing the clock from the lines in order to prevent button presses from registering (if the controller doesn’t have a clock signal, it can’t send most button signals, as we saw yesterday) or if it’s a side effect of some processing going on in the Sega. Either way, it seems that the way the Sega handles the controller comms is completely dependent on the game. This is cool because it means game developers can ship their own unique equipment to enhance a game if they want to, without being tied to the template provided by Sega — but it’s also annoying for anyone who wants to manufacture a controller for the system, because it means there is no guarantee that it will actually work with all games.

Out of pure curiosity, I took apart a bunch of the cartridges.

They’re about what I expected. There’s a package that is probably just storage that the CPU on the Sega reads in order to get the game’s program. The only thing that surprised me was that the game name is printed right on the chip — that suggests a custom run of these chips just for that cartridge.

The RetroShock project has a ton of information about individual cartridges, including which chips are used in which games, and the pinouts used by the different cartridges. The two photos I took are of cartridges that use the same pinout, but there are several different schemes, with some using all of the pins available on the 64-pin header and some using only a few.

The asciiPad SG-6 aftermarket Sega Genesis controller

The asciiPad SG-6 is an aftermarket Sega Genesis controller with extra buttons and macro features. As a kid, I had no idea what any of the weird switches did. I just sort of flicked them up and down until the controller worked as I expected. As an adult, I started wondering how the controller is able to support more buttons than the stock controller, let alone the frame rate slowing capability I remembered as a child. It’s a mystery I couldn’t just leave alone. To the batcave!

I’m mostly just getting started as a hardware reverser, so my tools are nothing fancy. For this, I used a bunch of jumper wires, male and female DB-9 breakouts, and a super budget logic analyzer. All told, materials cost around $20. Here’s the setup.

The Sega connected to the asciiPad and a logic analyzer using breakouts and jumpers

I wanted to capture the signals coming from the controller when different buttons were pressed and different modes were set. The controller is powered over the serial connection, though it uses a non-standard pinout. Ground is pin 8, VCC at 5v is on pin 5. The other pins float high at 4v9. I checked all of this with a multimeter (mine is nothing fancy) before hooking anything up. Measure twice, cut once, as the saying goes.

The controller should be completely standalone, so if you have a bench power supply, you can just set it to 5v and hook the + rail to pin 5 and the – rail to pin 8. I didn’t do this. Instead I opted to use a female DB-9 breakout connected to a male DB-9 breakout connected to the controller in order to power the controller and provide access to the pin signals. I did this because it’s cheap and it works. My little logic analyzer is connected to the male breakout with some jumper wires so it can read the pin logic levels. I collected traces for pins 1–4, 6, 7, and 9 (everything but the ground and VCC pin).

From there, it was time to have some fun. I started with powering on the Sega and taking a trace to see if there were any bootup signals sent over the line by the controller or the Sega. There aren’t — the lines all just go high. Then I started taking traces with individual buttons. Which quickly became… odd.

For all of the pins, a logic level of 0 (the pin floating high at 4v9) indicates that the button associated with that pin is not being pressed. A logic level of 1 (the pin being driven to ground) indicates that the button is being pressed. The standard controller pinout generally associates one button with each pin; however, there are a couple of cases where the pins are multiplexed and two buttons are capable of driving the pin. In the latter case, a special Select pin differentiates the two buttons by being either high or low. For example, the Start button and the C button are multiplexed on pin 9 — if the Select pin is low and pin 9 is low, this is interpreted by the Sega as the Start button being pressed. If the Select pin is high and pin 9 is low, it is interpreted as the C button being pressed.
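
That multiplexing scheme is simple enough to express as a tiny truth table; here’s a sketch in Python of how pin 9 gets interpreted (names are mine):

def decode_pin9(select_high, pin9_driven_low):
    """Interpret pin 9 according to the Select-pin multiplexing scheme."""
    if not pin9_driven_low:
        return None                    # no button driving this pin
    return "C" if select_high else "Start"

assert decode_pin9(select_high=False, pin9_driven_low=True) == "Start"
assert decode_pin9(select_high=True, pin9_driven_low=True) == "C"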

When I started pressing buttons on the asciiPad, only some of them affected the logic level on the pins. Really important, standard buttons, like A and Start, didn’t seem to have any effect on the pin outputs. But, you know, the asciiPad has a bunch of toggle switches and, who knows, some of them might be futzing up the operation of the buttons. It also has this nice “mode” button which, hell if I know what it does. It seemed like the next logical step was to hook the Sega up to a TV and get some intuition about what the odd buttons on the controller actually do, instead of relying on my decades old memories of how it worked from playing video games with my brother.

While I kept all of the gear that came with the Sega, including the TV tuner, unfortunately the RF interface is and always has been super finicky. I wasn’t able to get it working in an amount of time that my patience would allow for. I did however take apart the Sega and attempt to diagnose any apparent connector issues (I didn’t find any). Here’s a few photos:

Sega Genesis board (note that this is not the “Mega Drive” described in other teardowns)
Reverse side of the Sega Genesis board

I went back to taking logic traces of the controller pins and, by some amount of trial and error, figured out what the “Auto/Turbo/Off” (ATO) switches on the controller do. First, it’s visually apparent that one switch is associated with each button on the controller other than the “Start” and “Mode” buttons. I switched each of the ATO switches to off and captured the signals pictured below.

asciiPad pin trace with ATO switches set to “Off”

In order of activation, the buttons pressed during the trace were: Up, Down, Left, Right, Z, C, X, Y, A, B, Mode, Start. You’ll notice that we don’t have nearly enough pin activations in the trace to account for all of those buttons. What shows up on the trace are the D-pad presses and the B and C buttons, which correspond directly to the pinout described for the stock Sega controller. Clearly, something is up with the missing buttons. But we ignore that for now, and move on to setting all of the ATO switches to Turbo.

asciiPad pin trace with ATO switches set to “Turbo”

Turbo appears to quickly toggle a button as long as it is held down. This might be a great way to get in some quick punches in a fighting game, or float in a game with flying, or any number of other things.

asciiPad pin trace with ATO switches set to “Auto”

The “Auto” mode of the asciiPad behaves like Turbo except it’s active as long as the button isn’t pressed. Once a button is pressed, the signal for that button is driven high, effectively telling the Sega that the button is currently not being pressed.

The great mystery in all of these traces is why the X, Y, Z, A, Start, and Mode buttons don’t appear to affect the output on the pins at all. I could believe that Mode is perhaps a button that affects the controller operation and does not directly send anything out over the serial line. Similar arguments could be made for the other buttons, e.g. maybe they’re deactivated since a standard Sega controller only supports the A, B, and C buttons. But the A button is standard, and so is Start, and nowhere do we see either appearing in the output. Time to take the controller apart.

The asciiPad controller board

Inside the controller, the first thing that’s revealed is the chip that’s doing all of the signal processing for the controller. Beyond that, there really isn’t much to be seen — there are some resistors and capacitors, but very few, and they’re probably just conditioning the signals coming in from the buttons to the chip. Obviously there is also the header with the colorful wires coming out of it, which gets wrapped up into the black cord and becomes the serial connection to the Sega. The signals routed to that header come directly from the chip.

The backside of the asciiPad controller board, which interfaces the physical buttons

On the other side, we have the board components that interface with the buttons. When the pressure buttons are pressed, they complete a circuit, which changes the signal that the chip on the other side of the board receives. The toggle switches complete different circuits as they slide into one of three different positions. In the “Off” position, they complete no circuit; in the “Turbo” position they connect with the middle small grey piece, and in the “Auto” position they connect with the outside small grey piece. The exception to this is the “Fast/Slow” toggle, which only has two modes (this switch is located next to the Start and Mode buttons).

The backend mechanical bits of the asciiPad buttons and switches

When my patience returns, I’ll probably begin again by attempting to get the Sega hooked back up to a TV so I can functionally diagnose any issues with the controller. It’s possible that the buttons that don’t appear to send signals truly aren’t working. It’s an old piece of technology, and it was well used when it was new.

A couple of weeks ago, I embarked on a weird adventure with Sculpey and simple electronics. The goal had been to make something unique using the basic circuits skills I’ve picked up over the last eight months or so. Given that the final product literally just lights up a couple of LEDs when it’s plugged in, I’d like to disclaim that I didn’t exactly put myself through the paces here, but I did have fun. Here’s what I made.

SculpeyCat, powered by a 9V battery

It’s a bizarre cat, sort of Frankensteined together from a couple of basic sculpey shapes and steeped in all sorts of electronic components. Most of them were ripped out of an old bitchy router I had lying around and don’t do anything, but the LED eyes and the resistor ears are part of a bona fide circuit that lights up when you plug in the tail. The black wires are both ground, while the white one is for your positive voltage source (I used a 9v battery but most anything should work just fine). It’s not super complex, but for the curious, here’s how I made it.

The biggest challenge with something like this is making the circuit such that it can be embedded inside the sculpey and still allow your sculpture to keep its shape. You have to be cognizant that you can’t squash the circuit when you roll your sculpey around it, so if you’re used to making sculpey creations by plucking off a measure of clay and rolling it around in your hand to make spheres or cylinders or dodecahedrons (hey, you never know), you’re going to need to adjust your technique. Most of the time, creating your basic shapes, cutting them in half, and stuffing the circuit inside will do just fine.

For the head, I carefully assembled and soldered the LED circuit like so:

Circuit diagram for SculpeyCat

Each of the ear-shaped resistors provides current limiting for the LEDs (else they’d burn up and be sad). It’s a pretty straightforward circuit. Voltage in, light out. Here’s what it looked like pre-sculpification.

SculpeyCat LED circuit
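
As an aside, sizing those ear resistors is just Ohm’s law across the resistor; a back-of-the-envelope sketch with assumed values (I didn’t measure the actual parts):

# Current-limiting resistor via Ohm's law: R = (V_supply - V_forward) / I_led
v_supply = 9.0     # volts, from the 9V battery
v_forward = 2.0    # volts, a typical LED forward drop (assumed)
i_led = 0.020      # amps, a safe target current (assumed)

r = (v_supply - v_forward) / i_led
print(f"Use roughly a {r:.0f} ohm resistor")   # ~350 ohms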

To get the sculpey around it, I rolled a sphere, cut it in half, and placed the flat side against the circuit (forming the back of the head). The front side took a bit more care in order to work the sculpey around the LEDs. I incrementally added more, smoothing it out and making it as round as possible. Once I was satisfied with it, I hooked it up to the battery and made sure it still worked. It’s important to check often that your circuit survived.

SculpeyCat head

With the head done, it was time to form the body. For some reason I decided that the tail should be detachable. This was a terrible decision, as it forced me to use some really bulky and difficult to maneuver jumper wires as the conduit through the body from the head to the tail. I rolled the body clay around them, making a sort of cylinder, and made the female jumper ends flush with the backend of the cat. The other end of the jumper wires is shown below protruding out of the cat’s chest. The positive and negative wires needed to be soldered to their corresponding wire in the cat’s head (though you can’t see it in the photos, I’d marked the head’s negative wire with a black sharpie, because it becomes real difficult to tell negative from positive once the circuit is covered up). This is where the heartache started.

Stripping the jumper wires and soldering them to the head wires while also not destroying the sculpture was a little bit fiddly. Worse though, there was no good way to bend the jumper wires to retain the sculpture’s shape. I ended up having to add more clay to cover up the weird bending and squishing I did. Lesson learned — just use wire wrapping wire for everything. It’s infinitely more bendable and versatile in this context. And when the heck am I really going to remove the tail from this beast, anyway?

SculpeyCat out of the oven, sans caps and other meltable goodies

Last point of note. Before putting it in the oven, I removed the decorative caps, header, and tail wires (because I could). The caps I scavenged are very clearly rated at 150C, which should be fine at the 275F (135C) I baked this at, but better safe than exploded. I super glued them on post-bake and re-added the other components as well.

The finished product? A lovable terror that now sits on my desk.

SculpeyCat, wires all tucked away inside
Some lovely cat anus

Incoming influx of tech posts! I’d been trying out blogging on Medium (again) because the WordPress editor was crap, but… it’s seen some recent improvements. So I’m copying my content over. I might maintain both blogs, since nobody really knows my website is here. Part of me wants to try to build an online presence, and then the other more practical part of me realizes that would be exhausting. So we’ll see. At the very least, I’ll keep my website updated. Writing posts helps me think through the work I’m doing and also look back and remember that I did stuff and learned stuff. That can be kind of gratifying some days when it seems like there’s always so much more to know and learn.

Anyway, the point is – incoming content! Ahoy!

I’ve been taking things apart now for a while, just to get a glance at their insides and a feel for what different electronics look like. Someday I might do a monster post of all the teardowns, but I figured I can at least start posting the individual items I rip apart.

Starting with this automatic trash can.

It’s got a sensor that opens it if you wave your hand over the top of the can, and some LEDs that light up when it does. It’s powered by 6 AA batteries. It’s got a switch that turns the sensor on and off, and a switch that turns the power on and off. That’s about it for user-facing components. Now let’s take a look at the inside.

The two halves of the case are secured by a bunch of Phillips head screws. One is hidden inside the battery compartment (a great way to make sure you power it off before you open it). All told there are 8 of them, all identical, so there’s no need to worry about remembering where they go back. That’s always nice. Once they’re out, the case comes apart easily.

There’s not a lot to see inside. The positive rail from the batteries is connected directly to the toggle switch that turns the power on and off. From there, power is routed to the control board, which then supplies power to the motor. One operational detail of note – the motor is only used to open and hold the lid, and gravity is used to close it. So the control board probably only has to drive the motor in one direction.

The control board is covered in conformal coating – a nice touch, considering the case isn’t well sealed, things might be spilled on the lid, and the trash can itself is going to be a humid, icky environment. The coating should preserve the board longer against the kind of wear that the “elements” will expose it to.

The board is screwed down in somewhat of a cost-and-time-savings fashion. There are two holes for screws that have simply gone without. Two other places that could have been designed for screws instead have plastic rods from the case that guide the placement of the board. The screws on either end of the board have washers around them, to distribute their force, as they are meant to help pin the board in place against the two springs on the other side. I have no idea what the springs are for.

The bottom of the board isn’t too interesting – it has the LEDs that make up the display (when the lid is open, it has a kind of count down to when it’s going to close again) and it has the sensor. It looks like a little IR sensor to me, but then my experience with sensors is terribly limited. I could very well be wrong.

What continued to confuse me was this little board in this tiny slot that has three wires routed to it. So I pulled it out. It’s nicely labeled, with IO, VCC, and GND. It looks to me like a test board. If I were reversing this, first thing I would do is pull the plug on that and see what happened. It’s possible that the little chip on that board is sending IO somehow, but I can’t think of any reason for it to do so unless it’s for debugging (the print is tiny and my eyesight isn’t great, but it looks like a 487B2 519E4, which I was unable to find in the Googles).

Back to the main board. There are a number of transistors, both surface mount and through-hole. At least one of these turns the motor on and off. I haven’t quite worked out how the circuitry supports the behavior of the motor.

The big chip is clearly the main processor. There’s a voltage regulator, some caps, and some resistors that are probably doing some kind of voltage stepping for that chip. The other interesting parts on this board (the 8 pin chips) are an LM358 by TI (on the right) and a MX612 (on the left).

The LM358 is an amplifier. I guessed based on positioning that this has something to do with the LEDs. I’m not sure why they’d need an amp though. Could be it’s amping the voltage to drive the motor, but why locate it so far away from power, and the output to the motor?

The MX612 is a DC motor drive circuit. I don’t think it’s any mystery what that’s doing. The datasheet is conveniently in what looks like Chinese, but it does at least provide the pinout, and some graphs.

That’s all I have. Still a lot of unanswered questions, despite it being a relatively simple device.

It’s a snowy afternoon in the Adirondacks, and while everyone else is playing Scattergories, I’m playing with IDA.

I’m weird, I know.

I started an adventure in disassembling and reverse engineering my Roku 3 some months ago. It began with me taking the thing apart, snapping some photos, taking some pcaps, and poking at it until I shorted something (oops). The semi-destroyed unit provided a great opportunity for further disassembly, including removing the flash part on the reverse side of the board. With some help from my awesome coworkers and some luck, I was able to recover the contents of the flash, which include a couple of firmware images for the Roku.

A photo of my Roku 3. A complete teardown of the Roku 3 4200X can be found here.

Binwalk was able to fairly painlessly extract two ELF images from the flash. Loading them up into IDA revealed that they are VideoCore III ELF files. IDA isn’t really sure what to do with these, besides parsing out the segments and finding some strings. So I spent a day googling and learning about the architecture.

It turns out a more modern version of a Broadcom VideoCore is used on the Raspberry Pi 3 as the GPU. So the community has done some pretty awesome RE work on this architecture already. Sure, it’s VideoCore IV, but the work is pretty extensive and so far has appeared fairly applicable. Of particular note is the IDA plugin, which is a disassembler for the architecture. I dropped vciv.py into my ida/proc directory, and from there, I have something that looks like passable disassembly. Woohoo!

Comparing the two ELF images side-by-side, they look awfully similar. The strings window shows a precisely identical strings list. The entry points have the same code. So for sanity’s sake, I went ahead and compared the two files with a raw hex diff tool. They’re quite different. The first 5% or so of the files are nearly identical, except for a few bits here and there that could be attributed to read errors. But the other 95% is completely different. I scanned through the diff and found something interesting – a string that seems to indicate a firmware version. My current working theory is that the two images are the same firmware, but different versions. Likely the older version is a fallback in case of corruption of the newer image.
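
A raw byte-for-byte comparison like that takes only a few lines of Python; a sketch of one way to do it (names are mine):

def fraction_identical(path_a, path_b, chunk=4096):
    """Compare two files byte-for-byte and return the matching fraction."""
    same = total = 0
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        while True:
            a, b = fa.read(chunk), fb.read(chunk)
            if not a and not b:
                break
            total += max(len(a), len(b))
            same += sum(x == y for x, y in zip(a, b))
    return same / total

# e.g. fraction_identical("firmware_a.elf", "firmware_b.elf")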

Firmware version numbers of the two ELF files extracted from flash

While the Roku has wifi, ethernet, and USB, the only obvious mentions (via strings) in the ELF files are for video and audio processing. I’m most interested in the networking stuff at the moment, so I wanted to figure out whether that was probably buried in these images somewhere in a non-obvious fashion, or if it was in a library someplace else in the flash. I took another look at the binwalk output and didn’t quite answer my question, but found something else interesting – a mention of ThreadX. Now I know what it’s running.

Binwalk output mentioning ThreadX, the RTOS most likely running on the Roku 3

I didn’t find any immediate answer to my question about the networking comms, but there are some other files I’d like to extract from the blob, including the u-boot image, a MySQL database, and a couple of GIFs. Given what I’ve read, and because the binwalk output calls the u-boot image an Android bootimg, I suspect at least some of the stuff on the Roku is related to the graphics drivers released by Broadcom for the BCM 21553.


The other day a coworker sent me a fascinating article about solar panels and the electric grid. The next day, I woke up wondering how much solar it would take to power some simple electronics, like outdoor cameras for surveillance. And then I realized I had some solar panels literally in my backyard. These guys:

They’re your garden variety (ha) solar walkway lights. They’re about four years old, and my complex’s landscaper has done an excellent job destroying them over the years. What they do is they take in light during the day, and then they light up at night. This means they must have some sort of battery, light sensor, and LED, along with the solar panel. Not bad for ~$2. I decided to take one apart and find out what it was made of.

The screws were a bit weathered so getting them out took a bit of finagling, but with just the first one I found the battery. Unsurprisingly it’s rechargeable. It provides the load to the solar panel, which is what makes the circuit work at all.

The glass ball was just sealed onto the plastic and came off with a bit of wiggling, likely made easier because of the age of the thing. Removing three screws on the outside of the base and one screw on the inside released the shaft from the rounded part of the base, making it easier to remove the tiny circuit board from the fixture. All told, I ended up with this:

A solar panel, a resistor, an LED, a battery, and a circuit board to connect them all. Immediately, I noticed something was missing – where was that light sensor I assumed existed? Then I realized that the solar panel itself collects light, so the circuit can tell whether it’s light outside based on whether power is flowing out of the panel.

I reassembled the thing to see what sort of voltage was coming out of the solar panel, and to see what voltage was powering the LED. In lieu of a nice enclosure, my cute duct-tape job held the contacts to the battery so that the circuit would be complete.

The first thing I noticed was that the LED was seeing about +1V whether or not it was lit. That seemed strange. But the same coworker who sent me the solar panel article happens to be amenable to stupid questions, and he explained that the resistor is current-limiting the LED. Things started to make a little more sense. He also sent me this video to watch, and things started to make a lot more sense.

That black blob on the back of the board probably has some extra components underneath it. I’m not sure what I’m going to do with the solar panel yet, but I might try scraping that blob off somehow and recreating the circuit on a breadboard so I can play with it a little bit better and see what else I can power using the solar panel. I may also take apart the other solar light I have and see what kind of trouble I can get into trying to use more than one panel to power a circuit.

Okay, fair. So you don’t want to answer a question that someone could answer for themselves if they weren’t a lazy bag. But when RTFM is the response to a novice who hasn’t read the man pages, I get a little upset.

Man pages are archaic, difficult-to-search blocks of text. And yes, they’re better than nothing (especially if you don’t have access to the internet!), so I’m not calling them obsolete, but if other references are available, the last place you should direct a newbie is the man page. Their eyes are going to glaze over, and you’re going to discourage them from trying to learn anything new. Oftentimes, the answer to a simple question doesn’t require the depth of explanation a man page provides; sometimes all the user really needs to know is what order the arguments go in, or where the file name goes. Useful one-liners shouldn’t be discouraged as answers. As the user grows (or not) as a technology professional, they’ll learn more about these tools out of necessity.

We were all novices once. We all had to rely on the advice of experts and teachers to help us get to where we are today. If we’re going to support the growth of our industry and usher in young, new talent, we need to foster a supportive environment – one where newbies aren’t turned away to the man pages as a rite of passage.

The first load of hardware is in! The Raspberry Pi starter kit: complete with a Pi B+, case, HDMI cable, AC adapter, micro SD card (8GB), and heat sinks. I assembled it tonight (no pics because I’m lazy).

IMG_20140919_153946

The Arduino Uno starter kit: an Uno Rev3 (I think?), a small breadboard, a USB cable, a D battery adapter, and a bunch of jumper wires.

IMG_20140919_153953

And two wireless transceivers. The transceivers are compatible with the Uno, but I also have an Arduino Mini on the way that I’m hoping they’ll work with. The Minis are far cheaper than the Unos, about $5 vs. $15, and they’re, well, small.

IMG_20140919_154034

I’ll start playing with the Pi tomorrow and let you know how the projects go!

Tip #8: Goto statements are NEVER necessary!

This is C/C++-specific. I wish programmers would forget that goto statements exist. The only time you should ever worry about a goto is when you find one in someone else’s code and have to fix it, because a goto is a problem: it creates control flow panic. There is nothing you can do with a goto that you can’t also do with more readable, comprehensible code structures, so use those.
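As a quick sketch of what I mean (a hypothetical search function, not code from any real project), the goto version bounces control around:

int find_value(int *arr, int len, int target)
{
    int i;

    for(i = 0; i < len; i++)
    {
        if(arr[i] == target)
            goto found;   /* jump over the failure return */
    }
    return -1;
found:
    return i;
}

while the structured version says the same thing with no jumping around:

int find_value(int *arr, int len, int target)
{
    for(int i = 0; i < len; i++)
    {
        if(arr[i] == target)
            return i;   /* found it, no label needed */
    }
    return -1;          /* fell off the end: not found */
}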

 

Tip #9: Meaningful variable names help you and your collaborators

It may be tempting to write code with variable names like c, s, x, y, i… but don’t do it. This creates confusion. For example:

#include <string>
using std::string;

void iter_string(string s)
{
    int x = 0, y = 0;
    char c, b = 'y', a = 'n';

    for(int i = 0; i < s.length(); ++i)
    {
        c = s[i];

        if(c == b)
        {
            x = 1;
        }
        else if(c == a)
        {
            y = 1;
        }
        else
        {
            x = 1;
            y = 1;
        }
    }
}

It might make your code “shorter,” but if you’re losing track of what your variables refer to while you’re writing the code, chances are nobody else is going to be able to figure out what they mean at all.
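For contrast, here’s one way the same function might read with meaningful names. I’m guessing at the original intent (tracking whether ‘y’ and ‘n’ answers appear in a string), so treat the names as hypothetical, but the readability difference is the point:

#include <string>
using std::string;

void scan_answers(string answers)
{
    bool saw_yes = false, saw_no = false;
    const char YES = 'y', NO = 'n';

    for(int i = 0; i < answers.length(); ++i)
    {
        char current = answers[i];

        if(current == YES)
        {
            saw_yes = true;
        }
        else if(current == NO)
        {
            saw_no = true;
        }
        else
        {
            // anything that isn't a 'y' or 'n' counts as both
            saw_yes = true;
            saw_no = true;
        }
    }
}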

 

Tip #10: Always #define (or equivalent) your magic numbers

“Magic numbers” are numbers that appear in an expression without an obvious purpose. For example,

int purchase(int cents, int num)
{
    return (cents / 100) - (5 * num);
}

What is the 100 for? Maybe it seems implicitly obvious because of the variable naming, but what about the 5? Wouldn’t it be easier if it were written this way:

#define CENTS_IN_DOLLAR 100
#define PRICE_OF_OBJECT 5

int purchase(int cents, int num)
{
    return (cents / CENTS_IN_DOLLAR) - (PRICE_OF_OBJECT * num);
}

I certainly think so. So whenever possible, give your values names.

 

Tip #11: Use scope brackets for all of your ifs and whiles

One-liners like this don’t require you to use scope brackets in C/C++:

if(somecondition)
    dosomething();

but when you go to add another action to this if, you’re likely to do this by accident:

if(somecondition)
    dosomething();
    dosomethingelse();

which won’t actually do what you want (dosomething() will be executed if somecondition is true, but dosomethingelse() will be executed regardless). If you get in the habit of adding the scoping brackets, you can avoid odd errors like this.
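The habit costs almost nothing, and the braced version does exactly what you meant:

if(somecondition)
{
    dosomething();
    dosomethingelse();
}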

It’s amazing that, while many programmers are able to create impressively complex systems that solve hard problems, many of us are never educated in the “proper” ways to write code. This isn’t an issue of style, guys – it’s an issue of maintainability, readability, and efficiency. We’ve all had that moment where we open somebody else’s code and get hit in the face by a monolith, a goto statement, or something equally offensive. I like to think this isn’t intentional negligence, however, and so I aim to begin cataloguing my pro tips for developers as I encounter blunders in my own code and the code of others.

Tip #1: Comments like “Variable Declarations” and “Initialize Variables” are not helpful.

We’re programmers, so we know what’s going on. Comments like “Set up GUI elements” and “Declare variables to be used in GUI labels” are much more helpful, because they describe the use of the variables being initialized.

Tip #2: If you have to make a comment to delineate a section of code…

It’s a good indicator that that section of code belongs in its own function, file, class, or module.

Tip #3: If you find yourself repeating the same structure of code over and over again…

Think about how you could turn that structure into a function. Removing duplicated code is a great way to reduce complexity and save headaches when the duplicated code has to be edited: you don’t want to have to edit it seven times, once for each place it was duplicated.
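A minimal sketch of the idea (the data here is made up purely for illustration):

#include <stdio.h>

/* the summing loop used to be copy-pasted at every call site;
   now it lives in exactly one place */
static double average(const int *vals, int count)
{
    int sum = 0;

    for(int i = 0; i < count; i++)
        sum += vals[i];

    return (double)sum / count;
}

int main(void)
{
    int scores[] = {90, 85, 77};
    int ages[]   = {34, 28, 41};

    printf("avg score: %.1f\n", average(scores, 3));
    printf("avg age:   %.1f\n", average(ages, 3));
    return 0;
}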

Tip #4: The overall goal of any function is to be super readable.

If at all possible, we want it to look like this:

MyFunction()
{
    DoThing1();
    DoThing2();
    DoThing3(using, these, things);
}

It’s not always possible to reach this ideal, but there’s a lot you can do to get close. For starters, you can break any code that forms one cohesive idea out into its own function. Don’t worry if that function is only called once – the overhead of pushing and popping the stack is worth it for the added readability.

Tip #5: It is OK to assign a boolean expression to a boolean variable.

It is far better to do this:

b = x && y;

than to do this:

if(x && y)
{
    b = true;
}
else
{
    b = false;
}

which unnecessarily clutters the code and makes it less efficient. It’s actually a good exercise to turn off compiler optimization and compare the assembly generated by the two implementations.

Tip #6: Use positive logic where possible.

It is usually more clear to say “if this” rather than “if not this.”
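A quick hypothetical illustration (the names are invented for the example):

#include <stdbool.h>

/* stand-ins for real work, just so the sketch compiles */
static bool file_exists(void) { return true; }
static void open_file(void)   { }

int main(void)
{
    /* positive logic: reads like the sentence you'd say out loud */
    if(file_exists())
        open_file();

    /* the inverted version, if(!file_missing()), would behave the
       same but forces the reader to un-negate it first */
    return 0;
}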

Tip #7: Name booleans using positive logic

e.g. rather than naming a boolean “doorState”, name it “doorOpen”. This is much better, because you never have to remember what true or false means in the context of that boolean.
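A tiny hypothetical example:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    /* true unambiguously means "the door is open" */
    bool doorOpen = true;

    if(doorOpen)
        printf("Close the door!\n");

    /* with a name like doorState, you'd have to go look up
       whether true meant open or closed */
    return 0;
}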

Really awesome, simple timeline creator (you can see one featured on my Recruiters page).

http://timeline.knightlab.com/

Hello folks,

I recently completed my semi-annual OS re-install on the laptop. This year’s flavors are a dual boot of Windows 7 (still no Windows 8, at least until I can take advantage of the touch options) and Ubuntu 12.04 LTS. Windows I’m an old pro at customizing, and honestly I don’t do much anymore besides download my favorite apps, but Ubuntu I always have a little fun with when it’s freshly installed.

Today I just want to share the first things I did after installing Ubuntu.

First, download Chrome. I don’t want to start a Firefox vs. Chrome debate (both are great), but I’m a big patron of Google services, so Chrome it is for me.

Second, install Gnome 3. Primarily this decision was based on wanting to use this theme, but it also looked like something fun to play with… and here I am a couple hours later, still having a lot of fun. Immediately after the transition from Unity to Gnome, I found myself thinking Gnome was rather nasty looking. It’s not bad, I guess, but it’s pretty boring, and the little line accent underneath “Activities” in the upper left-hand corner of the screen is kind of ugly. But I figured, no big deal, since I’d be replacing it with that awesome theme I found.

There are instructions on how to install the theme underneath the image of the theme itself, at the link above, so I won’t go into that. A side effect of installing the theme, though, was that I learned about Gnome extensions… every shell should have these. Seriously. I love how easy it is to customize things in Gnome. I have been shopping for extensions at https://extensions.gnome.org/, and so far I have found a couple that I really like. One is this neat little resource monitor that fits in your status bar. The other is a nice little menu augmentor that puts restart and shutdown options on your menu alongside suspend. These extensions make Gnome great, but in and of themselves, they’re not what really makes it shine.

Let’s face it, every shell has its own idea of how users should work, and some ideas suit some people better than others. But with Gnome, I feel like it was made for me. Run your mouse over to the top left of the screen, and all your windows assemble themselves on your desktop; pick one, and you switch to it. Want to start a new application? Your favorites just popped up on the left, or you can browse all your installed applications by switching tabs. Gnome is simplicity, and I might even venture to say it’s inclined toward touch interfaces because of that simplicity and ease of use; it would be easy to imagine something like this on a tablet. Admittedly, I haven’t dabbled in all the shells out there, but this one has really impressed me so far, so I don’t mind giving it some high praise.

Well, that’s it for me. Soon I should be back with some fun stuff about installing the Android SDK, and customizing Ubuntu 12.04.

An excellent introduction to C function pointers: http://www.cprogramming.com/tutorial/function-pointers.html

Credit goes to Lifehacker for this useful tidbit. Anyone who’s used Linux for more than a short while will notice that one task you find yourself doing quite often is making a directory, then changing into that directory immediately after. Normally this is accomplished by the pair of commands

mkdir dir
cd dir

After a while this becomes tiresome, and you start to wonder: isn’t there an easier way? Maybe a command that does this simple task in one shot? The answer is no, there isn’t, but as with anything in Linux, it can be made.

1. Open up your shell configuration file (.bashrc or .profile if you’re using bash), located in your home directory. Note: if you want this command available to the root account as well, change root’s copy too, but be warned that botching root’s shell configuration can be hazardous, anywhere from a security risk to a PC-crippling problem.

2. Add the following lines to your file:

mkcd () {
    # make the directory (and any missing parents), then change into it
    mkdir -p "$*"
    cd "$*"
}

3. Try it out:

~$ mkcd dir
~/dir$

Original post here: http://lifehacker.com/5535495/create-and-change-to-a-new-directory-in-one-command

Here’s a useful little tidbit for anyone dual-booting Linux and some other OS. If you don’t want Linux to boot up by default, you can edit /boot/grub/menu.lst. To change the default OS to boot into, edit the line that says

default 0

to read

default (entry number of your preferred OS)

where the entry number of your preferred OS is the line that your OS appears on in the GRUB boot menu, minus one, because the numbering of entries starts at 0.

It is always good practice to back up system files before editing them, so save a copy of your menu.lst first; that way you can restore it to a known working state in case you break it. You can do this with a simple

cp menu.lst menu_copy.lst

from the command line.



Recently, I’ve learned quite a lot about Ruby and Rake. I plan on sharing this knowledge on this blog when I have some time, at which point I’ll edit this post with something useful… but until then I’ll be playing with this page and formatting it until I get something I like.

#some example Ruby code
array = ["one", "two", "three"]

array.each do |elem|
  elem = "EMPTY STRING (or am I?)" # reassigning elem doesn't touch the array itself
end