Data Portability in Games

May 9th, 2008

Over the last few days I’ve seen quite a few articles on the ‘net mentioning MySpace’s recent “Data Availability” program, which will have them share data with, among others, Yahoo and Twitter. This news is often mentioned alongside DataPortability, a framework for combining information from different social networking sites. I’m all for solutions like this; I love the thought of having data accessible from everywhere (as long as it happens in a controlled manner). Not that I see much point in it for myself – I can easily add what’s needed manually to the few social networking sites I’m active on. But I love the idea.

One problem is that this is yet another attempt at creating an open standard that can be used by anyone and everyone. I’m sure that DataPortability thinks it’s special and unique and brings something new into the disarrayed online world…but I’m also sure that OpenSocial feels the same way, just as FriendFeed does, along with who knows how many others. Looking at DataPortability’s FAQ page, it seems they are aware of the problem of constantly re-inventing new standards; they want to use existing standards effectively instead. …But in a controlled manner. According to their recommendations. …Which sounds like they are trying to impose a standard after all. One good thing about DataPortability is that they won’t try to create a centralized storage point for all data – unlike FriendFeed, which sounds like utter bollocks.

Anyway, good luck to all of them, and I’m not stupid enough to look a gift horse in the mouth: if DataPortability (or some other standard) becomes a widespread way of sharing data I’ll definitely look into how I could use it in upcoming projects. For example…Spandex Force 2. I could imagine some cool uses, such as importing personal information into the game, accessing photos that can be converted into an in-game avatar pic, or sharing pictures of impressive victories. Amongst other things.

I suspect that this could even be used for cross-game character data. It’s the old utopian dream that fanboys have yearned for over the years: imagine that you’re playing an RPG and that you’re pretty fond of Mr. Fagball (as your character might be called). Then you want to play another RPG – or even a game of a completely different genre – and you would now have the option of using Mr. Fagball in that game as well! Yayness! Of course, it would probably work like utter crap if it was implemented badly, but I could imagine that static character traits could be shared even though game-specific data isn’t.

For example, if Mr. Fagball is a character in an RPG, his STR stat might be 16. Even if it were possible to translate that into strength in a strategy game, it might be completely ludicrous – the strategy game could become totally broken. However, if the RPG stored information about Mr. Fagball’s pot-bellied appearance, that could (possibly) be of use in the strategy game as well. Like a cross-platform Mii. Except that this sharing wouldn’t have to stop at mere appearance; if data was gathered through social sites as well, personal information could be utilized by the game to make for an uncannily scary experience.

“Give it up, Moop-Gleez! Your evil plans are brought to an end!”
“So, Mr. Fagball… You have come to destroy me? I think not – I know your weakness! You made out with Patrick’s sister last weekend, and if you don’t throw down your sword right now I’ll e-mail him and tell!”
“NOOOOO!”
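
Silly dialogue aside, here’s a very rough sketch (in Python, just to make the idea concrete) of how a “portable character” record might keep shareable static traits separate from game-specific stats. The whole schema and every field name are invented for illustration; no real standard, DataPortability or otherwise, defines anything like this.

    # Hypothetical portable character record; everything here is made up.
    portable_character = {
        "name": "Mr. Fagball",
        "appearance": {  # static traits that most games could reuse
            "build": "pot-bellied",
            "hair": "none",
            "portrait_url": "http://example.com/fagball.png",
        },
        "game_data": {  # per-game stats that should stay within each game
            "some_rpg": {"STR": 16, "level": 12},
            "some_strategy_game": {"command_rating": 3},
        },
    }

    def shared_view(character):
        """Return only the parts that are safe to hand to another game."""
        return {
            "name": character["name"],
            "appearance": character["appearance"],
        }

The point is simply that the appearance block travels between games while the STR stat stays put.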



Linux Boot in One Second – Musings About BIOS Evolution

March 12th, 2008

The title of this post should be taken with a truckload of salt: as far as I know there’s no way to boot a complete Linux distribution in one second. But it seems that General Software has developed a BIOS that can boot to LILO in one second – and that’s not bad at all! Basically, all the unnecessary things are skipped, such as waiting for the video card to load its firmware, detecting devices, and so on. This lets the BIOS hand off to the bootloader incredibly quickly.

But heey… Wait a minute. If the hard disk isn’t detected yet, how can the bootloader start the OS?

Being both lazy and philosophically minded, I’ll speculate about the answer instead of looking it up. My guess is that everything a normal BIOS would detect has to be hard-coded one way or another: either the BIOS data is entered manually, or the BIOS has a “detect everything now and then save the config” mode that isn’t run on every boot. That’s actually pretty neat…for embedded systems. It sounds completely useless for desktop environments where you change keyboards and mice and video cards and electrical pets (what?!) every now and then. But since the article mentions the medical device market, I’d wager that this isn’t a big deal for the target devices.
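
To make the speculation a bit more concrete, here’s a toy sketch (plain Python, nothing resembling real BIOS code) of the “detect once, then save the config” idea. The probe_hardware() helper and the config file are pure inventions of mine; I have no idea how General Software actually does it.

    import json
    import os

    CONFIG_PATH = "hw_config.json"  # stand-in for some persistent store (NVRAM in a real BIOS)

    def probe_hardware():
        """Stand-in for a full, slow device enumeration pass."""
        return {"disk": "ata0", "video": "vga0", "keyboard": "ps2"}

    def boot_config(force_redetect=False):
        # Fast path: reuse the configuration saved by an earlier full probe.
        if not force_redetect and os.path.exists(CONFIG_PATH):
            with open(CONFIG_PATH) as f:
                return json.load(f)
        # Slow path: enumerate everything once and persist the result.
        config = probe_hardware()
        with open(CONFIG_PATH, "w") as f:
            json.dump(config, f)
        return config

Every subsequent boot takes the fast path and never pays for device detection – until you swap some hardware and have to force a re-detect.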

In fact, I wonder if this kind of optimized BIOS couldn’t be of value to other embedded devices as well, like home gateways and such.

But hey again, wait another minute or two. The slow boot times of home gateways are probably due to slow flash read speeds rather than a second or two spent at the BIOS level. In fact, come to think of it, I don’t know if I’ve ever seen a BIOS on these devices. So it seems this optimized BIOS is only relevant to embedded devices of specific architectures – to which the aforementioned medical devices may belong. But what about non-embedded devices? Wouldn’t this improved BIOS be useful for desktop computers too?

Short answer: meh.

Longer answer: don’t think so.

Full answer: in all probability, optimizing the BIOS would yield minimal boot time improvements. Detecting new devices and loading the video firmware don’t take all that long – the biggest culprit is loading the actual OS itself.

But, and this is a big but, the article also mentions how they cut the Vista loading time from 74 seconds to 24 by adding a UDMA-capable driver to the BIOS. I don’t really see how they can group these two BIOS improvements into the same article – they deal with wildly different things! Stripping the BIOS of unnecessary checks is quite different from adding a driver that improves efficiency a lot; the former isn’t very useful in most cases, but the latter is incredibly useful!
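
Quick sanity check on those numbers, nothing fancy, just the arithmetic:

    old_time, new_time = 74, 24      # Vista load times from the article, in seconds
    print(old_time / new_time)       # ~3.08, i.e. roughly a threefold speedup
    print(1 - new_time / old_time)   # ~0.68, about two thirds of the load time gone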

And that got me thinking… If loading the OS is the real bottleneck, wouldn’t it stand to reason to push as much capability as possible into the BIOS to make loading the OS faster? If adding a UDMA-capable driver gives a roughly threefold speedup, isn’t that the way to go for other bottlenecks as well – add improvements at the lowest level?

Oh wait. What this would end up as sounds suspiciously like a microkernel. I wonder if there’ll ever be a hybrid BIOS/microkernel.



Locked in and Shut out – Windows and Mac and Linux

October 23rd, 2007

A PC World article today mentions that Vista still has major incompatibility problems nine months after release. Brother multi-function printers don’t work correctly, it’s not possible to fax over the Internet, Photoshop CS2 won’t work – and so on. I don’t care particularly, since I don’t plan on upgrading to Vista for a year anyway, but it’s still interesting information. The most interesting thing wasn’t the article itself, though, but the comments it received on a Swedish magazine site that summarized the PC World piece. The usual bitching and moaning occurred: “Vista is slow” (I bet it is, so don’t use it), “Use Linux instead” (no thanks), “XP is better” (agreed, actually), “Vista is still in beta” (no it’s not; it’s just badly implemented) and on and on and on.

Then, all of a sudden, comes an interesting exchange of comments:

“Apple changes architecture with less problems than MS changes Word format!”
“There’s a difference between changing architecture for 4% of the users compared to changing something for 95%. If you’re using Apple you’re stuck with Macintosh – it’s a simple task to write drivers for such a narrow range of hardware.”
“[Inane comment snippet snipped] Well, I’d rather be locked in by Apple than shut out by Microsoft.”

I found that last comment very interesting because it reflects my own view about Microsoft and Linux: I’d rather be locked in with a proprietary OS and have all the functionality and programs I need than use Linux. In theory I like Linux – Open Source is appealing, Ubuntu and its ilk look pretty nifty, I love the customizability, and so on. But for my uses Windows XP beats any Linux distribution hands down, because I feel shut out from Linux.

All metaphors and similes break down eventually if you examine the objects in question closely enough, and so does my simile between the Apple/MS and MS/Linux situations. I’m generalizing horribly, but one could view things this way:

Mac: Small user base, small set of proprietary programs, low customizability.
Windows: Large user base, large set of proprietary programs, low customizability.
Linux: Small user base, large set of open programs, great customizability.

I can hear the outrage of Linux/Mac users. “What about stability, what about look and feel, what about this and that!” All good points, I’m sure, but since I’m trying to come to a conclusion instead of complicating things even further I’ll disregard all of them.

Now, looking at my small summary of the OSs, what can we observe?

First, that Mac users might have a stronger sense of community than Windows users, due to the smaller set of users and programs. That would be an excellent reason to feel shut out by Windows. But – to show where my simile above crashes and burns horribly – there really is no similar case between Windows and Linux. And to confuse things even further, I like Mac OS X but I feel shut out by it as well! If I extrapolate even further from these facts, I eventually come to this little list:

  • Mac users feel alienated from Windows.
  • Windows users feel alienated from Linux.
  • Windows users feel alienated from Mac.
  • Mac users probably feel alienated from Linux too.
  • Just as Linux users probably feel alienated from Windows and Mac.

What’s the conclusion, then? That OS debates eventually break down into territorial pissings and a case of liking what you’re used to – especially the programs you’re familiar with – so it’s bloody ridiculous to even try to be objective.

“Hey, what about all the examples of people who’ve abandoned Windows for Linux or Mac?”

If they were used to Windows and knew how to use it properly, there wouldn’t have been any cause for them to switch.

“But I knew this guy who had used Windows for ten years and then fell in love with Linux! Doesn’t that invalidate your comment above?”

I doubt that he used Windows properly then – I’m betting that he forced himself to use tools he didn’t like.

“Programs are irrelevant! There are always applications with equal functionality on all platforms, so anyone can cross over to a new OS without any problems.”

Use your GIMP if you like it. I don’t, though!

So there.



Swedes are Getting Dumber

May 16th, 2007

On a Swedish IT news site there are a few interesting headlines, one of which is that Sweden is “best” in Europe at using the Internet. (Link; beware – it’s in Swedish.) First of all, let me object to the word best. Let’s see. What constructive criticism could I conjure against that usage…? Maybe…the fact that it’s complete and utter bollocks! Best is a marvellous word for quantifiable comparisons within a clearly measurable area, but in what way is Internet use a measurable area? And what exactly would “best” imply? That we’re best in Europe at finding warez? That we waste time on the Internet instead of working? That we know how to write good Google keywords? The phrase is completely ridiculous and says nothing at all.

And on the subject of Google, there is another headline at the same site: Why Google is Making Us Dumber. Basically, that article insists that Googling things makes us dumber; for instance, we no longer do conversion arithmetic by hand (or in our heads, rather) – instead we use Google features for those kinds of things. Well, let’s see if I remember my logic classes; I’ll try to make a modus ponens argument out of this. But I’ll leave out the predicate logic.

The article’s claim is “if P then Q”, where P = “increased Google use” and Q = “getting dumber.” I’ll introduce R = “increased Internet use” as well, and state the intuitive hypothesis that if R then P. Throw in the first headline’s implicit claim that R holds for Swedes, and we have the following:

(R -> P) AND R, therefore P   (modus ponens)
(P -> Q) AND P, therefore Q   (modus ponens again)

Thus, Swedes are getting dumber. If you trust strange logic and strange articles you read on the ‘net, that is.
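
For the formally inclined: the whole “argument” is just two chained applications of modus ponens, which can be written down in a few lines of Lean. The premises go in as hypotheses, of course – formal validity says nothing about whether they are actually true (they aren’t).

    -- Two chained modus ponens steps; the dubious premises are hypotheses.
    example (P Q R : Prop)
        (h1 : R → P)   -- more Internet use implies more Google use
        (h2 : P → Q)   -- more Google use implies getting dumber
        (hR : R)       -- Swedes use the Internet more than before
        : Q :=         -- conclusion: Swedes are getting dumber
      h2 (h1 hR)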

I won’t really waste any time disputing the “Google makes you stupid” claim – it’s clearly ridiculous and a prime example of backward thinking. The same was said when calculators were invented: “Oh no, the kids won’t learn how to do simple arithmetic anymore now that they have a machine for it.” Granted, I suspect that kids today can’t do simple arithmetic, so I guess this example sucks. Still, I’m convinced that the productivity gains from automating simple tasks vastly outweigh the small setbacks in basic knowledge.

But wait, there’s more! I have yet another point to make with this little blog post. Some of you might have read my little rant about coffee, in which I claimed that coffee was the cause of major wars. I received some interesting (IRL) feedback on that; most who commented on the post were confused and didn’t really see the point of it. That’s okay, ’cause I was planning on bringing up the point later – like now. In the coffee post I claimed, for example, that coffee was the cause of the War of the Golden Stool. That was complete and utter rubbish, just like all the other coffee-related anecdotes in the post. Have you guessed the common thread of this blog entry by now? No? Okay, I’ll continue.

The post sounded confident and was backed by just enough facts to make it believable; no one really bothered to question my claims since the topic was dull, but I have seen many searches for the War of the Golden Stool lead to my site. I keep imagining that some kids have used my lies as interesting anecdotes in their schoolwork, and that a few teachers are scratching their heads in confusion right now. I hope that both those teachers and those kids have learned a valuable lesson about using things on the Internet as sources for their essays. There’s basically no guarantee that anything you read on the net is true, regardless of how authentic it seems.

This goes for the article about Google making people dumber as well: it’s a personal opinion backed by no facts. It doesn’t matter that a major Swedish IT news portal picked it up – it’s just as much rubbish regardless of who thinks that it might be valid.



Flash Memory Musings

March 9th, 2007

Today I browsed through various news articles and noticed that Apple is considering incorporating more flash memory into its products. Subnotebooks will have it, and analysts speculate that the iPod line will go cold turkey on HDDs and switch to flash entirely before long. Samsung has also released hybrid flash HDDs. Flash seems to be mighty popular right now!

This got me thinking. It’s a well-known fact that flash memory survives only a limited number of rewrites. A common number to throw around is a hundred thousand rewrites. Reading up on the issue a little shows that things aren’t really that simple… A hundred thousand rewrites seems to be a fictional number that someone conjured up as a vague average; what actually matters is the number of erase-write cycles. Those number a whopping million rather than a hundred thousand – however, the mapping between erase-write cycles and rewrites isn’t that simple.

There are two types of flash memory: NOR and NAND. NOR flash is slow to write and expensive, but it has a full address interface that allows random access to any location – basically comparable to normal RAM in that respect. Apparently it’s often used for BIOSes and firmware in embedded consumer products. (At first I couldn’t see any reason that NOR would be better for BIOSes or firmware, but the random access is probably the point: code can be executed directly out of NOR without being copied to RAM first.) This type of flash was used in CompactFlash early on, but was later scrapped for the cheaper NAND. NAND also has faster erase and write times and longer endurance, and is used in most products these days. NAND is the type that has a million promised erase-write cycles.

But, and this is a big but: NAND doesn’t offer byte-level random access. Memory is addressed in larger chunks, so if you want to – say – read address 0x00ffc you have to read the whole page that contains that address…and rewriting anything means erasing the entire block it lives in first. In other words, large files will span several blocks and require several erase-write cycles whenever they’re rewritten. That last bit is what makes me sit up and take notice.
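
Just to put some arbitrary numbers on it: with a made-up 128 KiB erase block and the promised million cycles per block, a quick Python back-of-the-envelope calculation shows how rewriting even a modest file eats into the budget. None of these figures come from a real datasheet.

    ERASE_WRITE_CYCLES = 1_000_000   # promised cycles per block (the figure quoted above)
    BLOCK_SIZE = 128 * 1024          # assumed 128 KiB erase block

    def blocks_touched(file_size_bytes):
        """Rewriting a file of this size costs at least this many erase-write cycles."""
        return -(-file_size_bytes // BLOCK_SIZE)  # ceiling division

    # Rewriting a 10 MiB file in place burns 80 blocks' worth of erase cycles.
    print(blocks_touched(10 * 1024 * 1024))  # -> 80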

It’s also interesting to read what Samsung wrote a while ago:

The hybrid hard drive prototype uses 1 Gigabit OneNAND Flash as both the write buffer and boot buffer. In the hybrid write mode, the mechanical drive is spun down for the majority of the time, while data is written to the Flash write buffer. When the write buffer is filled, the rotating drive spins and the data from the write buffer is written to the hard drive.

Is it just me, or does this seem rather ineffective, considering that HDDs have a relatively long lifetime – probably longer than the flash’s rewrite budget? What happens if the flash gives in before the HDD? Is the write buffer required, or can the hybrid HDD be used as a normal HDD after the flash has croaked? I hope the latter, but I fear that the functions are fairly hard-wired; in other words, HDD life expectancy will probably drop. I know people talk about swapping out all your hard drives every third year or so, but seriously, how many actually do that? I have hard drives that are probably older than some of you who read this. (At least the spambots; I’m sure I have HDDs older than them.) I do not want a trend where life expectancy drops in exchange for fewer moving parts.
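
For my own understanding, here’s a toy model of the hybrid write mode described in the Samsung quote: writes pile up in a flash buffer, and the platters only spin up when the buffer fills. The class, the sizes and the print statement are obviously invented; real firmware looks nothing like this.

    class HybridDrive:
        """Toy model: buffer writes in flash, flush to the platters when full."""

        def __init__(self, buffer_capacity_bytes):
            self.buffer_capacity = buffer_capacity_bytes
            self.pending = []        # writes currently held in the flash buffer
            self.pending_bytes = 0

        def write(self, data):
            self.pending.append(data)
            self.pending_bytes += len(data)
            if self.pending_bytes >= self.buffer_capacity:
                self.flush_to_disk()

        def flush_to_disk(self):
            # Spin up the mechanical drive and commit everything in one go.
            print(f"spinning up, flushing {self.pending_bytes} bytes to the platters")
            self.pending.clear()
            self.pending_bytes = 0

Note how every buffered byte still gets written to flash first, which is exactly why the wear question above matters.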

But then again, flash firmware has this neat thing called wear levelling:

This effect is partially offset by some chip firmware or file system drivers by counting the writes and dynamically remapping the blocks in order to spread the write operations between the sectors. This technique is called wear levelling.

I can only assume that similar techniques are implemented in the hybrid HDDs, and that using the flash as a write cache rather than as random access media will, by its very nature, spread the wear out fairly evenly. Still, evening out the wear doesn’t change the fact that erase-write cycles are pretty limited. I wonder how many blocks are written per day in normal computer use; speculating about life expectancy would be much easier if I had some real numbers to play with rather than vague notions that “HDDs last longer than flash.”
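
Out of curiosity, here’s a heavily simplified sketch of the counting-and-remapping idea from the quote: each logical block gets mapped to whichever free physical block has seen the fewest erases. Real controllers are vastly more sophisticated; this is just the principle, with made-up class and method names, and it assumes there are at least as many physical blocks as logical ones.

    class WearLeveller:
        """Naive wear levelling: always write to the least-erased free block."""

        def __init__(self, physical_blocks):
            self.erase_counts = [0] * physical_blocks
            self.mapping = {}  # logical block number -> physical block number

        def write(self, logical_block):
            # Release the physical block this logical address used before.
            self.mapping.pop(logical_block, None)
            in_use = set(self.mapping.values())
            free = [b for b in range(len(self.erase_counts)) if b not in in_use]
            # Pick the least-worn free block and charge it one erase cycle.
            physical = min(free, key=lambda b: self.erase_counts[b])
            self.mapping[logical_block] = physical
            self.erase_counts[physical] += 1
            return physical

    # Hammering a single logical block still spreads the erases around:
    wl = WearLeveller(4)
    for _ in range(8):
        wl.write(0)
    print(wl.erase_counts)  # -> [2, 2, 2, 2]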

It’s strange, really: I’ve been working with flash-based embedded systems for a while, but I never had a grasp of the hardware involved. It’s probably a good thing to read up on things now and then.