23 February 2009

The One-based API

If you try to do anything fancy with CListCtrl, sooner or later you will find the allure of custom draw list controls too great to resist. When that happens, you will find yourself drawing each item - and each of its subitems - one at a time, in order to achieve the stunning visual UI that your app deserves.

And sooner or later, you will stumble across CListCtrl::GetSubItemRect(), in your quest to add special fonts, colors, and other effects. This is not an API you want to tackle when you are sleep-deprived or otherwise under the weather. Unlike every other CListCtrl API, GetSubItemRect() is one-based. What does this mean? It means that for subitems 1...N, it works fine. For subitem 0, what you get is the rect for the entire row, not just the first column. It might take you a while to reach this understanding, since bizarre visual funnies are normal when you are trying to pimp a UI. Finally you narrow it down, and discover the dark side of CListCtrl::GetSubItemRect().

When I first discovered this, I remember thinking, Why on earth did they do it that way? Couldn't they just add another API - something like GetTheWholeDamnRowRect()? Well, OK, I actually thought some other things, too, but let's get back to GetSubItemRect(). The obvious thing to try is a little arithmetic on the adjoining subitems, but there are several gotchas with this approach; for starters, what if subitem #2 (that's "2" using one-based nomenclature) has a width of zero (that's "0" using... well, you get it)? A nice, simple solution suddenly becomes a bunch of nested if's marching across the screen. (I know, I've tried it - don't go there.)

But where else can we get the dimensions of the first subitem? In this case, the header control is our friend. We use CListCtrl::GetSubItemRect() to get the top and bottom borders of the rect, and then use CHeaderCtrl::GetItemRect() (which thankfully is zero-based!) to get the left and right borders.

The code in a derived class looks like:

BOOL CMyListCtrl::GetSubItemRect(int nItem,
                                 int nSubItem,
                                 int nArea,
                                 CRect& rect)
{
    // get rect from CListCtrl - for nSubItem == 0 this is the
    // entire row, but the top and bottom borders are still correct
    BOOL ok = CListCtrl::GetSubItemRect(nItem, nSubItem, nArea, rect);

    if (ok)
    {
        // take the left and right borders from the header control,
        // which is zero-based for every column, including column 0
        CRect rectHeaderItem;
        VERIFY(GetHeaderCtrl()->GetItemRect(nSubItem, &rectHeaderItem));
        rect.left = rectHeaderItem.left;
        rect.right = rectHeaderItem.right;
    }

    return ok;
}
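Stripped of the MFC plumbing, the trick is just combining two rectangles: the vertical extent comes from the list control's row, the horizontal extent from the header item. A framework-free sketch of the same arithmetic (the Rect struct and the sample values are made up for illustration):

```cpp
#include <cassert>

// Minimal stand-in for CRect, just for illustration.
struct Rect { int left, top, right, bottom; };

// rowRect:    what CListCtrl::GetSubItemRect returns for subitem 0
//             (the whole row - but top/bottom are correct)
// headerRect: what CHeaderCtrl::GetItemRect returns for column 0
//             (left/right are correct, top/bottom are the header's)
Rect SubItemRect(const Rect& rowRect, const Rect& headerRect)
{
    // horizontal from the header, vertical from the row
    return { headerRect.left, rowRect.top, headerRect.right, rowRect.bottom };
}
```

Because the header item rects are in the same client coordinates as the list rows, no further adjustment is needed - and no adjacent-subitem arithmetic, so zero-width columns are a non-issue.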

Whatever Happened To The 4 Laws?

A new Navy report warns that war robots could go Terminator: "There is a common misconception that robots will do only what we have programmed them to do. Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person."

Because of the sheer size of the code - typically measured in MLOCs - and the many programmers working on a single robot project, it is easy to see how this could happen. I just wonder if the report isn't being too sanguine. I mean, with some cars today having nearly 100 MLOCs (see The End of Programming), isn't it likely that a war robot would need a lot more lines of code?

It seems to me that now is the time to standardize some of this stuff. For example, I can see how the military will get the most bang for the buck by off-shoring code development for the vision package, the ambulatory package, the communications package, and other types of utility packages. Maybe the armaments package needs to be done in the U.S., for security reasons.

And what about the OS? Hopefully, the M-2000 robots will be able to use the same OS as the M-1000 robots - at least without a complete rewrite. Can you imagine how long it would take a business to introduce enterprise software, if each project started with design and development of a completely new OS?

With software packages, and a standardized OS, anyone could build their own killer robots. Oh, wait...

22 February 2009

8 Out of 50 Ain't Bad

I have no idea how many times I have been happily coding along, only to find that what I thought was a universal law of data conformity was in fact invalid, which meant I needed to write a lot of special-case code to handle incompatibilities, strange formats, or just plain mistakes.

Today I was finishing up another article for CodeProject when I ran into another one of these. I was trying to pop open the web site for a state, using the state name to form a URL. After doing the first six with no problems, I was lulled into believing all state web site admins were doing something sensible - namely, forming the web site URL based on the state name, like http://www.california.gov. Silly me! Then I got to Connecticut, which uses http://www.ct.gov. I suppose there's some justification. I mean, who wants to type in a long state name? And who can remember if the t's in Connecticut are doubled, like in Massachusetts? And then there was New York (http://www.state.ny.us). This one definitely has a "committee" feel to it. Or maybe a "consultant" feel. Anyway, it's just not normal.

In all I found 8 state web sites that required special handling - i.e., which did not conform to the ideal URL of http://www.state-name.gov. Now I'm curious about what happened with those 8 states. My first thought was, maybe the ideal web site URL was squatted on by a travel agency or something. But when I typed in the ideal URL for those states, most of them displayed "Unable to connect" messages. So I really don't know why the New York web site admin didn't just redirect from http://www.newyork.gov to http://www.state.ny.us. Across the river in New Jersey, that's exactly what the admin did - you can enter either http://www.newjersey.gov or http://www.nj.gov.
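This special-case handling amounts to an exception table consulted before falling back to the ideal pattern. A minimal sketch (StateUrl is a made-up name, and the table shows only the two exceptions mentioned above, not all 8):

```cpp
#include <cctype>
#include <map>
#include <string>

// Build a state web site URL: use the ideal pattern unless the
// state is a known exception. The entries below are just the two
// examples from the text, not the complete list of 8.
std::string StateUrl(const std::string& stateName)
{
    static const std::map<std::string, std::string> exceptions = {
        { "connecticut", "http://www.ct.gov" },
        { "newyork",     "http://www.state.ny.us" },
    };

    // lowercase and strip spaces to form the key ("New York" -> "newyork")
    std::string key;
    for (char c : stateName)
        if (c != ' ')
            key += static_cast<char>(std::tolower(static_cast<unsigned char>(c)));

    auto it = exceptions.find(key);
    if (it != exceptions.end())
        return it->second;
    return "http://www." + key + ".gov";
}
```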

It seems to me that in this case, simpler is better, because it's the state's residents who are most likely trying to access the state web site. If you ask google, they can probably tell you exactly which 8 states have non-ideal URLs, since those states are the ones that everybody has to search for. I have to admit, though, that 16% non-ideal URLs is not bad at all.

19 February 2009

The End of Programming

Sometimes when I'm not coding or reading - which together account for most of my waking moments - I begin thinking about the strange profession I have. Strange in how it began, and I suspect strange in how it will end. I have not been able to overcome disquieting thoughts after reading Vernor Vinge's prediction of The Singularity. His most famous quote is the one that appeared in Omni magazine in 1993: "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."

Some have expressed their belief that by 2030, in some lab somewhere, a researcher will turn on the world's first AI, and that will be that. Of course, when that happens, programming as a profession ceases to exist. So - assuming we escape the Mayan apocalypse - shortly after 2030 there will be no more buggy IDEs, no more non-standard compilers, no more language wars, and probably no more CS degrees. I mean, what would be the point? Obviously AIs could program computers better than humans.

This means that there are about two decades to go. So far, there seems to be plenty of work for programmers. IEEE Spectrum reports that "The avionics system in the F-22 Raptor, the current U.S. Air Force frontline jet fighter, consists of about 1.7 million lines of software code. The F-35 Joint Strike Fighter, scheduled to become operational in 2010, will require about 5.7 million lines of code to operate its onboard systems. And Boeing's new 787 Dreamliner, scheduled to be delivered to customers in 2010, requires about 6.5 million lines of software code to operate its avionics and onboard support systems."

Talk of 1, 2, or even 6 MLOCs excites even the most jaded programmer. But these are all puny efforts. The real gorilla in the room? Cars. Yep, if you bought a premium-class automobile recently, “it probably contains close to 100 million lines of software code,” says Manfred Broy, a professor of informatics at Technical University, Munich, and a leading expert on software in cars. The current S-class Mercedes-Benz requires over 20 million lines of code alone, and the car contains nearly as many electronic control units (ECUs) as the new Airbus A380.

I wonder: what will be the first car with BLOCs under the hood?

18 February 2009

Lions and Tigers, Oh My!

On many programming sites such as CodeProject there are articles posted on rootkits, viruses, Windows message spying, and keylogging. The reaction whenever these sorts of articles appear usually begins with site members asking "Do we really want this kind of article here?" and ends with ... nothing. Nothing in the sense that nothing more is said, and nothing is done. My reactions follow the same lines - wondering if the article will incite otherwise kindly programmers to flood the internet with every kind of abhorrent malware, wondering if the mere posting of the article will brand the site as a programming cesspool, its members stained forever.

I usually realize fairly quickly that the suspect article is only a shadow of the kinds of information available on some web sites, about how to actually write and distribute malware. I have never seen any of these articles talk about techniques that weren't discussed in great detail elsewhere. The article author's intentions also need to be considered - usually it's something like, "I got this virus, and I wanted to know how it infected my PC, so I dug into it a little and this is what I found".

I find this perfectly normal behavior, since I do it myself all the time. Programmers are curious, and want to share what they discover with other programmers. And there are many things to learn from studying and understanding malware:
  • How to detect the presence of malware on a PC
  • How to remove malware
  • System internals that may be useful in writing other software
  • Coding techniques - malware probably uses some of the most minimalist and efficient algorithms of any software, game code included
So do I have a problem with these types of articles being posted? Usually, no. The only problem I would have is if the article obviously advocated creating and distributing malware (for whatever reason). Otherwise, I enjoy reading about the clever programming techniques that malware uses, and thinking about how to use these techniques myself.

Do you know what an LSO is?

[I originally posted this on CodeProject, but I'm re-posting here to make it easier to find.]

I was looking for more info on a Flash security alert, and came across an explanation of what LSOs are. Just incredible. I can't believe they think it's ok to store this crap on my computer.

It turns out that Adobe's Flash Player maintains its own cookie-like files called Local Shared Objects. Because they are not browser cookies, your browser has no control over them. While cookies are limited to 4KB of text, LSOs can be as large as 100KB. Cookies are controlled by your browser, but LSOs are controlled by the Flash Player, using obscure, hidden settings.

LSOs can be set and read by web pages, even if you can't see a Flash animation on the page. If you look, you will find sites devoted to explaining how to use LSOs to track user movements online, and store small databases on the user's computer, to eliminate the need for making a round-trip back to the web server. I have even seen a posting from a user complaining that his bank was using LSOs to store his personal information, even though there was no Flash animation on the bank's site.

Here's how to stop this nonsense. By default, Flash accepts all third-party LSOs; to change that, you have to go to Adobe's Flash Player Settings Manager site.

On the left you will see a Table of Contents. Under that, click on Website Privacy Settings Panel. What you will see displayed is the actual management console to manage the settings on your computer. If you don't recognize a site in the list, that's not surprising - you were never asked for permission to store this crap on your computer. What I did was simply click on Delete all sites. You may want to be more selective.

OK, now click on Global Storage Settings Panel in the Table of Contents. Again, you're looking at the actual Settings Manager. Now uncheck the box that says "Allow third-party Flash content to store data on your computer".

This should take care of LSOs.
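For the curious, LSOs live on disk as .sol files - on Windows, typically somewhere under %APPDATA%\Macromedia\Flash Player\#SharedObjects, though the exact path varies by OS and Flash version. If you wanted to enumerate them yourself, the filename test might look like this (IsLsoFile is a made-up helper, not part of any API):

```cpp
#include <string>

// True if a filename looks like a Flash Local Shared Object.
// LSOs are stored as ".sol" files, one per object, grouped in
// per-domain subdirectories under the #SharedObjects folder.
bool IsLsoFile(const std::string& name)
{
    const std::string ext = ".sol";
    return name.size() > ext.size()
        && name.compare(name.size() - ext.size(), ext.size(), ext) == 0;
}
```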

17 February 2009

Google's My Bitch

Recently on CodeProject I suggested that each of the site's programming forums should have a large button at the top that said Google's My Bitch. It's really surprising that IT people should have to be reminded of google, but the programming forums on CodeProject (and other programming sites) are being flooded with questions that could be answered very quickly with a simple google search.

Google itself might have contributed to this state of affairs. For most of last year, Google Groups search was broken, returning results that were completely irrelevant, or from a different newsgroup than the one that was actually specified. I've tried several Google Groups searches in the past week, and so far it seems to be working correctly. I haven't seen anything about when (or how) it was fixed, but hopefully people will come back to using it and other online searches, instead of asking for real-time answers.


Copyright (c) 2009 by Hans Dietrich