Sunday, December 09, 2007

Looky here: who wants a giant network-accessible hard drive which won't let you share the 26 most common filetypes? Honestly, why would anyone buy a piece of hardware which is so unbelievably crippled that it's completely unsuitable for the purpose it's being advertised for? And more importantly, why would someone design such a horrible piece of hardware?

On a similar, but more sinister note, if America thinks you might be guilty of a crime, then it feels it can just kidnap you. The mentality of guilty until proven innocent really is showing through here. I wonder what would happen if I kidnapped an American citizen and brought them to Ireland because I suspected them of a crime. I'm sure America wouldn't view that too kindly.

Thursday, December 06, 2007

My Christmas exams (worth 35% of my degree) are getting closer and closer. Pretty much every day since last Saturday has turned into a 10:30am->7:30pm study-a-thon. I'm trying to learn in the space of 10 days what I should've been learning over the last 12 weeks. It's mostly working.

As a result, I've become a minor insomniac. Tis only 2pm, and I've only been trying to sleep for 3 hours now. This describes it pretty well:

Monday, November 26, 2007

In two short days I'll be flying out to Madrid for the Mono Summit. Woohoo! Turns out I'm missing my last 3 days of term, in which I should be getting two tutorials and our first problem sheet for a particular subject. So, not the best time to be fecking off to a foreign country... ah well!

Also, if that F-Spot guy who was asking me about upgrading F-Spot to use the latest libgphoto2-sharp is about the place, give me a shout via email and let's organise something before I head over. Otherwise, just try and find me between Wednesday evening and Friday evening.

P.S. If I fail any exams because of this, I'm holding Aaron responsible; he tempted me over with a Chicken Pag! So no new MTP code for Banshee if I fail ;)

Wednesday, November 07, 2007

I heard some great news earlier in the week and just got around to checking it out. There's another C# implementation of the BitTorrent protocol being developed! Good stuff! They've been working on it for about a year, and have "the core of the beast" pretty much complete. Considering I thought I was the only person with a C#-based BitTorrent implementation, I figured it'd be great to have a chat with them and see how they solved different issues.

So, about 15 mins ago, I loaded up that link and decided to download their alpha client, just to see how it works. That's when I became seriously pissed off.

Without even having run the 'alpha client', I realised that the SeedPeer codebase was actually MonoTorrent*. Yes, these guys had the cheek to take my code and pretend it was theirs. They tried to pass off about 18 months of my development time as their own, without so much as giving me credit or mentioning that they were, in fact, lying when they said "we have almost finished coding the core of the beast". The worst thing is, they weren't even smart enough to do it properly.

So, for anyone out there who has actually used SeedPeer, or anyone who has blogged about it, please try to make it known that SeedPeer == MonoTorrent, except that they don't even use all the features I have available. For example, here's their 'TODO' list:
  • MultiTracker
  • Torrent Creation
  • Custom DHT (faster)
  • Rss Feeds
  • Integrated Search Function (with commenting)
  • Custom Peer Exchange
  • Fast Resume
  • Global/Per torrent Settings
  • Download Priority
  • more...
Here's a list of the features on their TODO list that already exist in MonoTorrent:
  • MultiTracker
  • Torrent Creation
  • Rss Feeds
  • Fast Resume
  • Global/Per torrent Settings
  • Download Priority
  • more...
I suppose this is what you can expect when you make your code public. People will use it (which is what I want), but then they'll try to pass it off as their own (which is just plain wrong). So, if anyone wants to email the SeedPeer people, their contact form is here, so feel free to let them know you know ;) Also, if you want a nice GUI for MonoTorrent, check out :)

* Yes, I am 100% sure of this. I haven't used Reflector to verify, but I'm 100% sure that at least 95% of the code is a direct copy/paste from MonoTorrent. It might even be 100%. If someone is bored, they can use Reflector (or whatever) and see I'm not lying. No better proof than looking yourself.

Also, if the SeedPeer guys do read this, using an obfuscator won't prevent me from recognising my own code. I can think of a dozen ways offhand to check whether a client is my client without having to see the code. It's ridiculously easy.

EDIT: For those of you who know these things, the middle paragraph of the MIT/X11 license states:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
Does that mean if they distribute a binary, they have to include that license header in some form or other?

Friday, November 02, 2007

I decided to install the English dictionary in firefox today so I could get proper spelling corrections when writing emails and whatnot. So I fired up Firefox and browsed my way over to here to grab the English dictionary.

So it turns out that 'American English' is actually 'English' whereas 'British English' is what I wanted. Now, considering English originated in Britain, why the hell do Americans insist on referring to 'American English' as 'English' and referring to English proper as 'British English'? British English is 'English'. You're the ones who created your own separate spelling and dialect.

It's the equivalent of referring to 'Canadian French' as 'French' and referring to French proper as 'France French'. C'mon guys, you didn't invent it, stop trying to make it sound like ye did!

In other news, my MacBook is in for repair for an unknown amount of time after it just stopped booting up. So I'll be MIA until it gets back to me. I'm still contactable by email should anyone need me.

Sunday, October 14, 2007

So, I'm looking for a bit of help. It's not a particularly hard task, it's just that it'll take me a while and I need to go do some study. Basically, I need a method which parses XML-like data and adds each element name and its contents to a Dictionary. It is fairly trivial, but I don't like the idea of having to think through all the string mangling required ;) First person to get me a working method gets 10 brownie points.

You need to be able to handle something like this (sorry for the paste, but Blogger borks on the XML tags):

Thing is, this is, technically speaking, invalid XML. The tag should be escaped, and the '&' in ObjectFileName should also be escaped. Unfortunately, I can't rely on the source of this XML being fixed any time soon, so I need to be able to hand-parse it. I've outlined a procedure at the end which should be able to handle most of the mess that will be thrown at it. The only thing I'll add is that the manual parser doesn't have to cope with every eventuality. If something goes wrong (for example an artist's name containing characters which break the parsing) then don't worry, just abort. This is a last-ditch effort to parse. It should succeed 99% of the time if you follow the procedure below.

Here's some pseudocode:

while (currentPosition < input.Length)
{
    int nextTag = input.IndexOf('<', currentPosition);  // find the next opening tag
    string elementName = read up to the matching '>';   // read the element name
    currentPosition += the number of characters just 'parsed';

    string contents = read up to the start of the closing tag; // the contents are between currentPosition and the end tag
    currentPosition += contents.Length;
    currentPosition += length of the closing tag;

    if (nextCharacter != '<')
        AbortParse(); // if the next character is not '<', something has gone wrong, so give up
    myDictionary.Add(elementName, contents);
}
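To make the procedure concrete, here's a rough sketch of it in Python (Python purely for brevity; `parse_elements` and its abort-by-returning-None behaviour are my own naming for illustration, not anything from the original post):

```python
def parse_elements(data):
    """Best-effort parser for flat, possibly-invalid XML-like data.

    Walks the string looking for <name>contents</name> pairs and collects
    them into a dict. On anything unexpected it gives up and returns None,
    mirroring the AbortParse() step in the pseudocode above.
    """
    result = {}
    pos = 0
    while pos < len(data):
        start = data.find('<', pos)
        if start == -1:
            break                               # no more tags: we're done
        name_end = data.find('>', start)
        if name_end == -1:
            return None                         # unterminated opening tag: abort
        name = data[start + 1:name_end]
        pos = name_end + 1                      # contents begin right after '>'
        close = data.find('</' + name + '>', pos)
        if close == -1:
            return None                         # no matching closing tag: abort
        result[name] = data[pos:close]          # a stray '&' or '<' in here is fine
        pos = close + len(name) + 3             # skip past '</name>'
        while pos < len(data) and data[pos].isspace():
            pos += 1                            # ignore whitespace between elements
        if pos < len(data) and data[pos] != '<':
            return None                         # junk between elements: abort
    return result
```

Like the pseudocode, this copes with unescaped '&' and stray '<' characters inside an element's contents, and simply bails out rather than trying to handle every pathological input.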

Saturday, October 06, 2007

When I read the news about GCC supporting CIL a few days ago, the very first thought to cross my mind wasn't "Wow, that's really cool", it was "Wow, I wonder how many people are going to start shouting not to use GCC anymore because it's 'contaminated' with Microsoft code and you'll get sued if you try to use it for anything".

So should I take it as a good sign that there's no uproar? Have people realised that implementing an open specification cannot get you sued? You never know, it might have happened. Still, it's an amazing achievement. I can't say it's something I'm ever likely to use, but it does have some nice possibilities.

Wednesday, October 03, 2007

With the Mono Summit coming up soon, I'm hoping to be able to spare the time from college to head over for at least 2-3 days of it. It'll be interesting to get together with a bunch of people for some hacking or some talking or whatever's going on.

So, if anyone out there is interested in talking about/hacking on (or redesigning ;) ) any of the projects I've been working on, or projects similar to them:

MonoTorrent: My pet BitTorrent library project
Lunar Eclipse: The XAML designer for Silverlight
libgphoto-sharp: The C# binding for libgphoto2 - for MTP-based media devices
Banshee MTP support: using the above binding

do give me a shout and I'll see what can be arranged. I'm already looking forward to getting some hacking in on F-Spot over the week to port it to the new libgphoto-sharp bindings. Should be fun!

For anyone else who's kinda undecided about going, ah sure gwon! Tis a bit of craic ;)

Friday, September 28, 2007

So over the last few weeks I've been working on improving the C# binding for libgphoto2. Thankfully, the backend C wrapper was in fairly good shape thanks to the guy who originally developed it, and trickv, who became its maintainer and is the guy responsible for implementing MTP support in Banshee.

Looking at the code as a newcomer to the libgphoto-sharp 'team', the first thing I realised was that the C# API was a direct copy/paste of the C API. There was no proper frontend which simplified the use of the libgphoto2 library. For example, getting a list of connected devices and then connecting to a specific one required detailed knowledge of the libgphoto2 API, over 100 lines of code, and great care to dispose of objects correctly.

So, my first task in getting full MTP support into Banshee was to write up a new API for libgphoto2-sharp which hid all that nastiness from the end user. The new API is, I suppose, 80% complete. Quite a few of the methods in the API are blocking, and so asynchronous equivalents will have to be added. One of the more immediate benefits is that detecting and connecting to a camera takes 3 lines of code now ;)

So, if anyone out there wants to use the new, simpler API (F-Spot and Banshee devs, I'm talking to you) you can check the code out with this command:

svn co gphoto

The binding should be considered API unstable until (probably) the release of libgphoto3.x.

Friday, September 21, 2007

As both a cyclist and a humanitarian, I think I'm fully qualified to say: women with babies should not be allowed out in public.

"What are you on about?" I hear you ask. Well, the reason is simple. Whenever a woman sees a baby, they instantly become hypnotised by its hideous features and are reduced to mumbling cryptic phrases such as "Goochie goochie goo" and "Who's a big boy then?" to said baby. Unfortunately, this distracts them from real-world issues such as how to cross the road safely.

I was cycling to college yesterday, as I've done for the last 3 years (I cycled to secondary school for the 6 years before that and primary school for 3 years before that). All was going well; I was a mere 2 minutes from my house and just preparing to take a left turn (in Ireland we drive on the left-hand side of the road) when all of a sudden, a woman with a baby decided to cross the road in front of me whilst talking to her baby. So there I was, travelling at approximately 25 miles an hour, with a baby in a pram being pushed by an idiot a mere 15-20 metres in front of me.

I jammed on the brakes, the back tyre locked, the wheel skidded on the wet ground and I went flying. Whilst I got away with some scrapes and bruises, my MP3 player wasn't quite so lucky: I only got music from the left earpiece. I was pissed! So, as an electronic engineer, I tried to fix it. A quick google got me instructions on cracking open the case, so I did.

Initial inspection made it look like very minor damage:

However as i gently poked it, more and more bits started coming off:

Finally, by the end of it, the entire left-hand side was toasted and the top bit was completely broken off. I was none too happy:

So, my task now is to find something to wedge along the side of the earphone jack which will hold the metally bits in place. Everything works fine at the moment, but unless I support those metally bits, they will bend back out of position through use. Worst case scenario, it's a 30 gig portable hard drive which can play films on any TV via TV-Out. Still pretty useful.

Monday, September 10, 2007

I don't care how i get one, but i need one of these!

Clicky Linky.

It looks so, so, so nice! I just wish free public wireless was as prevalent in Ireland as it is in the US.

Thursday, September 06, 2007

Have no fears my American friends, your President has your best interests at heart. Check out this interview where he reveals where that 50 billion in defense money is going:

Wednesday, September 05, 2007

I also care about the issues!

Sun Microsystems Demands University Study Retraction

The University of Washington, apparently hoping to capitalize on the recent hype around their controversial study on Baby Einstein™-style videos, followed up yesterday with another, similar study. In the new study, researchers found that Java programmers understand an average of seven fewer Computer Science concepts per hour spent with Java each day compared to similar programmers using other languages. Sun calls the study "seriously flawed", citing the fact that you can combine the names of Gang of Four Design Patterns to form new Computer Science concepts that all Java programmers understand, such as the ObserverFactoryBridge, the BridgeFactoryObserver, and the well-known FactoryObserverBridgeChainOfCommandSingletonProxy, beloved of Java programmers everywhere. Java experts at Sun say they're not sure how many combinations there are of the twenty-three pattern names, but there are "definitely a lot of them."

It's true. Java programmers do have a tendency not to be familiar with the new programming paradigms that Web 2.0 brings out daily. So, in an effort to do my bit to help these poor developers, I decided to calculate exactly how many combinations of the 23 pattern names there are. After spending hours with a pen and paper doing lots of complex multiplication, differentiation, division and... addition, I came up with this answer:

There are 7 possible combinations of the 23 names.

So for all you Java people out there, there's not much to remember. You can thank me later; a pint will do.

Saturday, September 01, 2007

So the summer is finally at an end. I've returned to Ireland from what was probably the best summer I've had in a while. I got to work on something useful, with some cool people, on a project that is single-handedly going to destroy all of Linux (if you believe the people on Groklaw).

When I arrived in Boston just over 3 months ago, the team were in the middle of their 21-day hackathon to get a working demonstration of Silverlight running on Linux. I was tasked with the job of creating a visual designer for Silverlight. Of course, the first thing I did was panic. I knew nothing of Silverlight, nothing of XAML, and had 12 weeks to produce something useful.

So, 12 weeks after I started, I managed to produce something ;)

Basic features: Video
All your standard features are there. You can select items, resize, rotate, move, alter properties through the property pane on the right. You can undo and redo (most things are undo/redo-able).

Animation Recording: Video
The basics are there for recording animations. Not everything can be animated as of yet, but the supporting infrastructure is all there. It's just that currently it can only animate properties which take doubles as their value.

You can do cool stuff like record several keyframes, then move them around to make the time longer, or shorter, or you can completely rearrange the keyframes so things happen in a different order. You can also seek along the storyboard and see the positions of the elements at different times in the animation. It's not quite up to the standard in Blend, but it's usable ;)

Sample Animation: Video
This is an animation which was created entirely using Lunar Eclipse. Everything you see was done through the IDE. The XAML was then copied and pasted into a text file, and the only modification it needed was to make the storyboard start as soon as the canvas was loaded. Other than that, the XAML used was all auto-generated in the designer.

So, the overall goal of this designer is to have a good base which can be integrated into MonoDevelop in the (hopefully near) future. As the designer is going to be written in Silverlight, it should be relatively easy to stick a web-based frontend on it and use it through a web-browser. How cool would it be to be able to create little animations all through a browser!

There is still a tonne of work which needs to be done, the next part of which is to make the XAML a little less verbose and the animations able to animate more than just doubles. After that, who knows! There's a bunch of stuff not done, a bunch of things which probably need updating or extending and a bunch of testing to be done.

So all in all, it was a fun summer. At the start, nearly every time I updated the Moonlight codebase from the repository, either it broke my code or I found new bugs. Nearly every time I reported a bug, it was fixed within 30 minutes. That's what life is like on the bleeding edge. I have to say, toshok was amazing; I definitely owe him a pint. I wouldn't be surprised if every time I wrote the word "bug" in IRC, he shuddered. He was usually the guy who ended up fixing my problems.

For the rest of the people in the office in Cambridge (Miguel, Jeff, Aaron, Garett, Guy (if I spelled that right) and whoever else I've forgotten), it's been great! If you're ever around Ireland, gimme a shout. I know the best place for sushi ;) If not, sure I may see ye at the Mono Summit. Who knows.

The only question i have after the summer is: If he can't swing from a web, what can he do?!

Tuesday, July 31, 2007

Tis that time of the month again! MonoTorrent 0.13 has been tagged in SVN, so tis time to update again. As per usual, there's a long list of changes, the biggest of which is the commit of the latest version of the piece picker. It's a big step up efficiency-wise compared to the older piece-picking algorithm. While the core algorithms are essentially the same, they've been refined and honed into a sleek beast of a class ;) Here's a partial list of changes, but the piece picker change is the biggest reason to update sooner rather than later.

If a peer sends data too slowly, its piece requests are reassigned to another peer
Smarter piece picker - Allows multiple peers to download the blocks from the same piece.
- Has significantly less overhead than with the old piece picker code
- Significantly faster endgame mode with the new code
Optimised Socket.BeginRead usage to need approximately 15% fewer calls to BeginRead.
Optimised the loading of fast resume data.
It is possible to safely call .Stop() on a torrent while it is hashing.

Peers sending a HaveAll are correctly marked as seeders
Fixed bug which resulted in crashes under MS.NET with encryption code
Fixed potential issues with ConnectionListener race conditions
Stale fast-resume data is deleted so it isn't reloaded by accident

Added a 'Complete' bool to TorrentManager
Exposed the PeerEncryption stuff so it is possible to tell what encryption method a connection is using
Marked EngineSettings and TorrentSettings as Serializable
It is now possible to create .torrents in a non-blocking way using the async BeginCreate method. The operation is also cancellable.
All EventArgs now contain a reference to the TorrentManager that fired the event.

Monday, July 16, 2007

Work has been progressing steadily on Lunar Eclipse, although not quite as fast as I'd hoped. At the moment, my big challenge is coping with resizing while shapes are rotated. Unfortunately, it doesn't seem that easy to deal with. I've attached a video which demonstrates the issue. Check it out here:

When a shape is at its normal rotation (i.e. the angle is zero), a change in the mouse's Y coordinate can be directly related to a change in the height of the shape, and a change in the X coordinate can be directly related to a change in width, so everything resizes nicely.

However, when the shape is rotated by 90 degrees, a change in the X coordinate now relates to the height, and a change in Y relates to the width. At other angles, a change in 'height' is contributed to by both the X and Y coordinate changes of the mouse.
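For what it's worth, the usual trick here is to rotate the mouse delta back into the shape's local coordinate frame before interpreting it as a width/height change. A minimal sketch in Python (the function name and signature are mine, purely illustrative; nothing here is from the Lunar Eclipse codebase):

```python
import math

def resize_delta(dx, dy, angle_degrees):
    """Rotate a mouse movement (dx, dy), given in canvas coordinates, by
    -angle so it lands in the shape's local frame. The returned local dx
    then maps to a width change and the local dy to a height change,
    regardless of how the shape is rotated."""
    a = math.radians(angle_degrees)
    local_dx = dx * math.cos(a) + dy * math.sin(a)
    local_dy = -dx * math.sin(a) + dy * math.cos(a)
    return local_dx, local_dy
```

At 0 degrees this leaves the delta untouched; at 90 degrees a vertical mouse movement comes out entirely as a local horizontal (width) change, matching the behaviour described above. Shapes drifting during the resize usually means the anchored corner also needs to be recomputed in the same local frame so it stays fixed.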

The problem is that I don't know how best to handle this. The main issues are the following:
1) The amount of resizing should be based on how far the mouse moves in the 'right' direction, i.e. if you move the mouse perpendicular to the side of the shape, the side should expand to follow the mouse.
2) When resizing, the shapes shouldn't float around. As you can see towards the end, when I resize the rotated shape, it physically moves around the canvas.

So, if anyone has any advice on how to tackle these issues without tonnes of helper code, that'd be amazing. Currently, things are looking like they could get fairly messy trying to get this to work.

If anyone wants to check out the code, it's available at:

Wednesday, July 11, 2007

While I knew that creationism had found a home in America, I didn't realise they had dedicated museums for it. Still, it's nice to know that the traditional values of common sense, reason and logic are being held up, like a bright torch, to lead us on into the future. I'd hate to think stupid things like scientific evidence could get in the way of a proper education as to how things happened way back when.

Since my last blogpost, several things have been happening in MonoTorrent. Firstly, I finally managed to track down what was probably the most elusive bug to date.

Here's an abridged version of the events.

I had been getting reports on and off of strange crashes in the BigInteger class. These crashes appeared to be memory corruption problems of some sort. Needless to say, I immediately suspected that the reporter had faulty RAM, a highly overclocked system, or even a corrupt .NET framework install. However, after getting him to run Memtest and a few other tests, I had to rule those out as causes. As neither he nor I could reproduce the crash, there wasn't much I could do.

Time passed (several weeks, I think) and I was getting him to log every access to the class for me. Two other people had reported the same bug, so I knew something was definitely up. The only thing is, I couldn't reproduce it, no matter what I tried! I was hammering the code with dozens of new connections a second and getting no crash.

I had been in touch with Sebastien Pouliot, who was doing his best to help me track down the bug. Eventually I got fairly pissed off about the whole thing and decided it was time to solve it once and for all. I had already wasted hours trying to reproduce it at this stage, so if I didn't get it fixed this time, I was just going to completely disable the encryption code and thus "fix" the issue.

I logged into the Windows machine I had at work and coded up a quick testcase which hammered the BigInteger class with random calculations. This didn't break after a fairly lengthy run. Then I added 10 threads performing the calculation, as this was a 4-processor machine; this way I could check more numbers at a time and so (hopefully) get a crash sooner.
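For the curious, the shape of that testcase is easy to recreate. A rough equivalent in Python (the original hammered Mono's BigInteger from C#; `stress_modpow` and the constants here are my own illustrative choices):

```python
import threading

def stress_modpow(threads=10, rounds=200):
    """Run the same modular-exponentiation calculation from many threads
    at once and compare every result against a single-threaded reference.
    Any mismatch is collected; on a healthy implementation the returned
    list is empty. More threads running on more cores means better odds
    of surfacing a race or corruption bug."""
    base, exp, mod = 1234567891011, 65537, 999999999989
    expected = pow(base, exp, mod)      # single-threaded reference value
    failures = []

    def worker():
        for _ in range(rounds):
            if pow(base, exp, mod) != expected:
                failures.append((base, exp, mod))   # record the corrupt result

    workers = [threading.Thread(target=worker) for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return failures
```

The same idea works for any suspected thread-safety bug: a known-good answer computed once, then the identical calculation hammered concurrently until the answers disagree.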

BANG! I had reproduced it.

I then spent the next hour or so (wasting some of both Miguel's and Sebastien's time) finally tracking it down to a compiler bug in gmcs. The conditions for reproducing the bug were fairly strict, which is why I never managed to reproduce it myself:

1) You must be running under MS.NET 2.0 (meaning you have to be on windows)
2) You must be running a multi-processor machine
3) There must be more than 1 thread running a BigInteger.ModPow calculation simultaneously.
4) It has to be Wednesday ;)

The quick fix was to just compile the BigInteger code using the Microsoft compiler. The only thing I'll say is I pity the person who has to track down the bug in the compiler; it's unlikely to be easy.

Friday, June 29, 2007

MonoTorrent 0.12.1 has been tagged in svn. I'd recommend anyone running MonoTorrent under MS.NET to update to this revision as soon as possible. There is an issue in the Mono.Security lib whereby you'd get strange exceptions in MS.NET if more than one calculation of BigInteger.ModPow() was being run simultaneously.

Thursday, June 28, 2007

I've just tagged MonoTorrent 0.12 in SVN, so anyone using the library in their application is *highly* advised to use this new release. It contains several very important changes which affect library performance in a major way. Check the forum for a changelog of the important stuff ;)

Wednesday, June 27, 2007

I'm now officially into the third week of my internship at Novell. Tis interesting stuff. I've managed to put faces to a lot of names and am learning a lot of interesting things! Since I've gotten here, I've been working a bit on Moonlight. I wasn't quite part of the hack-a-thon as I have zero knowledge of C++, but I did get the fun job of exercising the API and pointing out bugs over and over until people went insane ;)

As you may or may not know, it's Hack Week at Novell. What this means is that every Novell employee gets to work on whatever project they want for this week.

While my appointed task for the summer is a Silverlight designer, I'm also working on it for Hack Week as it's great fun! The idea is to end up with a visual designer for Silverlight which can run from a WinForms GUI, a GTK# GUI or just from your browser! At the moment I'm implementing it with GTK# as it makes my life easy, but sticking on a new frontend (e.g. a web-based one) should be relatively simple.

So, you might be wondering what the designer can do at the moment. The short answer: nothing useful. The long answer: lots of stuff!

At the moment you can draw any of the basic shapes: Squares, Rectangles, Ellipses, Circles, Lines and Pencil. There's support for undo/redo. There's support for converting your amazing drawing into XAML (there will soon be support for hand-editing the XAML and loading those changes back into the drawing surface).

So all in all, nothing useful, but it still does a lot ;)

Miguel was nice enough to upload my screencast to YouTube, which demonstrates the amazing awesomeness of the designer! By the end of the summer I'd hope to be able to record animations in the designer so you can easily create that stunning animation to incorporate into your website/desktop application/whatever.

In other news, MonoTorrent has gone through an extensive bug-fixing phase which fixed a lot of those annoying bugs I'd never managed to track down before. The incentive to keep hunting for these elusive bugs came from Andy Henderson. Thanks a lot for all those charts, graphs, explanations and reports, Andy! I couldn't have done it without you.

Here's a partial list of recent changes/fixes:
* Allocating fewer byte[] arrays, but making them larger. This results in fewer objects being pinned, less heap fragmentation and (hopefully) slightly better performance
* Fixed crash when trying to parse UDP trackers
* Fixed bug where threads wouldn't stop correctly when calling Dispose on the engine
* Incoming encrypted connections being received correctly - thank you azureus for hiding this from me
* Individual files now contain their % completion
* Fixed potential threading issue in encryption code
* Rewrote the public API
* Fix to make sure that both global and per-torrent connection limits are obeyed
* Fixed typo in rate limiting which meant rate limiting was slightly less stable than it should have been
* Starting a torrent manager which has not been hashed and doesn't have fast resume data now works correctly.
* Added option to enable/disable have-message suppression - the recommended value is to disable have suppression.
* Fixed several critical issues in piece picking code
---- Reduced CPU usage hugely
---- Reduced memory usage noticeably in certain scenarios
---- Rarest first algorithm now chooses rarest pieces as opposed to most common pieces (doh!)
---- Rarest first algorithm falls back to trying to choose less rare pieces if all the rarest pieces are already downloading
* Now handling older clients which request 32kB pieces correctly.
* Correctly dropping incoming connections when the TorrentManager is not running
* All download/upload speeds and amounts are given in kB/sec and kB respectively
* Fixed bug where large single-file torrents wouldn't load correctly - multi-file torrents were unaffected
* The MessageTransferred event now fires for incoming messages.
* Updated the PeerID code to make it *much* easier to use
* Fixed bug where an array out of bounds exception would be thrown when trying to request the last piece of a torrent whose number of pieces was an exact multiple of 32 (that was a fun one to debug ;) )

So these changes basically mean you have a *much* faster, more stable, better MonoTorrent! I have a few more ideas on reducing memory and CPU usage which shouldn't take too much effort to implement. I just have to find the time ;)

Saturday, June 09, 2007

In less than 24 hours, I'll be flying off to Boston for the summer with only a small suitcase, a bicycle and a guitar to keep me company. I've been lucky enough to be accepted as an intern with Novell for the summer. I get to be chief coffee monkey and photocopier for a few months, isn't that great! ;)

This is an amazing opportunity. Firstly, I get to work on Mono (or related projects). That's gonna kick some serious ass. Secondly, I get to visit the States for more than a week or two, which I've never been able to do before. This means I get to visit everywhere I want to visit as opposed to being short on time and having to compromise. Thirdly, it's just frikin awesome ;)

My only hope is that I have enough time to do everything I want to do over there: visit NY, go to the Cheers bar, see the sights, visit my friend in Pennsylvania, go sky-diving (or not... who knows). So if anyone has any suggestions on what I should visit in Boston, let me know. I don't want to miss out on something good.

Friday, June 01, 2007

A few weeks ago, myself, Miguel, Aaron and Michael were roped into taking part in a podcast for the Google Summer of Code. So if anyone wants to tune in and take a listen, it's available here.

For those of you taking part in the SoC this year, it should be worth a listen!

Monday, May 28, 2007

In an interesting turn of events, CSS (the "copy protection" system used on regular DVDs) is no longer considered an "effective technological measure" for preventing copying under the law. The direct result of this is that it does not qualify for the European DMCA's anti-circumvention laws.

The European DMCA only bans the circumvention of "effective technological measures", thanks to an amendment that was pushed through a while ago. It seems that once a protection method is well and truly broken, it's no longer illegal to circumvent the DRM on your favourite DVDs, Blu-ray discs or HD-DVD discs.

More info here:

Sunday, May 20, 2007

I gave Mono.Nat a once-over yesterday and chopped about 15% off the codebase through an ingenious set of refactors (basically i turned proof-of-concept code into maintainable code).

It has more complete support for uPnP than before; i think i now support every method exposed by the routers. So if anyone wants uPnP support in their application, grab the source from the Mono SVN (websvn link:) in the "Mono.NAT" folder.

For an example on how it works, look at the program in the Examples folder. And as always, send me all bug reports. Finally, if you have a uPnP router, and it is *not* detected by the example program when it's run, or some of the methods do *not* work as expected, let me know and i'll get some special debug builds to figure out the problem.

Finally, if anyone wants to supply me with a NAT-PMP capable router, or knows how i could easily host a NAT-PMP "server", i'll implement support for that as well (over the coming weeks).

Saturday, May 19, 2007

For those die-hard fans of Starcraft who've been waiting eagerly for Starcraft 2 to appear, you need wait no longer. Click. While i don't expect this game to hit the streets for quite a few months (if not a year or more), it's because of scenes like this that i can't wait to start playing. Thankfully Blizzard are the ones developing the game. I have full faith in them to not screw it up like so many other sequels.

Hell, if we're really lucky there'll be an open beta (or maybe a closed beta and someone gets me an invite ;) ) where we can test everything out for some multi-player zerg rushes.

Wednesday, April 18, 2007

Brilliant. Despite telling customers that there would be no change in the anti-copy protection on their latest DVDs, Sony has backed down. They're now issuing exchanges for every faulty disc.

The only thing i find odd is that they claim to have received complaints for less than one thousandth of one percent of affected discs shipped. The pedant in me instantly shouts "false statistics". What's the point in giving stats based on units shipped? Surely the only stats that matter would be based on units sold. Obviously if you have 1000 discs sitting in shops, you're not going to receive complaints until someone buys them.
Despite AACS having been broken once already for both BluRay and HD-DVD, and a huge list of disc keys having been extracted for both formats with some parts of AACS permanently bypassed, these mega-corps still haven't learnt that DRM does not work.

Once again, in a desperate attempt to make their discs copy-proof, Sony has managed to screw up severely. A year ago, Sony decided that they'd install a secret rootkit, undetectable by regular means, on computers which attempted to play their audio CDs. As bad as that was, at least the CDs were playable!

Now they've come up with a new ploy: make their DVDs unplayable on their own DVD drives. If you can't play it, you can't copy it. To be honest, this must be a PR nightmare for them. They are knowingly selling bastardised imitation DVDs that do not play on DVD drives. Hell, they are still selling their own DVD drives without any warning that they will *not* play new Sony DVDs.

I suppose the worst thing to come out of this is that other DVD drive manufacturers may be forced to spend time and effort updating their drives' firmware so that their customers can use their legitimately bought entertainment discs.

Saturday, April 14, 2007

I had a rather interesting experience on the bittorrent mailing list recently. There was a guy who had a server which was severely IO limited and he was trying to figure out how best to improve performance. The problem was that due to the randomness of how a piece is chosen to be downloaded in bittorrent, his server was having to do a lot of random disk seeking thus limiting his maximum upload rate to about 11-15MB/sec even though he had the bandwidth to handle much more than that.

There are hacks to avoid this kind of IO limiting, such as advertising that you have no pieces available when you connect to a peer and then telling them about selected pieces later. The idea is that you buffer the advertised pieces in memory, so when a peer requests one of them you don't have to seek to disk to fetch it. With a sufficiently large memory cache you significantly reduce the amount of random disk access needed and increase throughput hugely.

However, this has several disadvantages. The biggest disadvantage of this method is that it can (and does) reduce overall throughput. Take a situation where a torrent has 100 pieces. You put the first 10 pieces into memory and then send messages advertising those 10 pieces to everyone. If everyone already has those pieces, then no uploading will occur until you swap out those pieces and upload the next batch.

Secondly, you *must* disconnect every peer you've connected to each time you decide to swap out those 10 pieces. If you don't, the other peers will remember the pieces you previously advertised and may request them even though you no longer have them in memory, which means you once again have random seeking. (The amount of random seeking would still be significantly reduced, as each peer would only know about 20 of the actual 100 pieces if they weren't disconnected.)

Thirdly, this method of seeding requires special logic. Therefore if you want this kind of special handling you must either hack it into an existing open source client yourself, pay someone to do it, or find a client that supports this and use it (regardless of whatever other problems/deficits that client may have).

Now for the fun part: This issue was addressed in the Fast Extensions for the bittorrent protocol with the "SuggestPiece" message.

Suggest Piece is an advisory message meaning "you might like to download this piece." The intended usage is for 'super-seeding' without throughput reduction, to avoid redundant downloads, and so that a seed which is disk I/O bound can upload contiguous or identical pieces to avoid excessive disk seeks.

As you can see, the SuggestPiece message was designed to fix the exact problems i've described above. Unfortunately, due to either bad wording or just bad interpretation, this message is failing miserably at that.

The SuggestPiece message is an "advisory" message. My own opinion is that "advisory" was added to signify that you do *not* have to act on the message. The reason for this is that you could already be downloading piece 3 off peer X when peer Y sends a SuggestPiece message suggesting you download piece 3 off him instead. In that case you would not follow the suggestion, you would just ignore it. That makes sense. It's logical. Downloading the same piece twice would just be stupid ;)

Unfortunately, several bittorrent developers who've implemented the Fast Extensions have misinterpreted this as "you can completely ignore this message if you want to". One developer cited his reason for completely ignoring the message as "i have no use for this, so i ignore it". That's a poor reason for not implementing proper handling of the SuggestPiece message. Even "i'm too lazy" would be a better excuse, as you'd at least be acknowledging that you have an incomplete implementation.

A "proper" implementation of the SuggestPiece message would be to request that piece off the peer that sent you the suggestion at the earliest convenient time, i.e. the very next piece you request off them should be the suggested piece.

Let's assume that every torrent client implemented SuggestPiece handling as i described above. Now i'll replay the situation above using SuggestPiece messages as opposed to selective piece advertising.

Firstly, you advertise all pieces as available when you connect to each peer (this requires no special extra logic). Every time you receive a request from a peer, you load the entire piece into memory. As soon as the piece is loaded, you send a SuggestPiece message to every other peer (this requires no special extra logic). The other peers will then act on that message (again, no special extra logic) and their next request from you will be for that piece. That way, every time you load a piece into memory once, you can send it from memory to every other peer that wants it.

The benefit of this method is that every time you load a piece into memory, you make other peers request it off you (if they need it). You will *never* have a situation where you are not uploading. You will *always* have better performance than the initial scenario where you were IO limited. Even if you're still doing the same number of random disk reads as before, you will be able to increase your upload rate significantly, because each piece that enters memory is sent to more than one peer.

I'd guess that you could increase performance by at the very least fivefold in a torrent with 100 other peers using the SuggestPiece method. The other major advantage is that you require no special logic in either the seeding client or the downloading client! All you need is a proper implementation of the SuggestPiece message. It's not that hard!
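To make that concrete, here's a toy simulation of the scheme (illustrative Python, not monotorrent's actual code; every class and method name here is made up): the seed reads a requested piece from disk once, suggests it to every other peer, and a compliant peer makes its very next request the suggested piece, so the cached copy gets served from memory.

```python
from collections import deque

class Peer:
    def __init__(self, name, missing_pieces):
        self.name = name
        self.missing = set(missing_pieces)  # pieces this peer still needs
        self.suggested = deque()            # pending SuggestPiece hints, newest first

    def on_suggest(self, index):
        # "Advisory" handling: remember the hint unless we already have the piece.
        if index in self.missing and index not in self.suggested:
            self.suggested.appendleft(index)

    def next_request(self):
        # Honour the most recent suggestion first; fall back to any missing piece.
        while self.suggested:
            index = self.suggested.popleft()
            if index in self.missing:
                return index
        return min(self.missing) if self.missing else None

class Seed:
    def __init__(self, peers):
        self.peers = peers
        self.cache = {}        # piece index -> data loaded from disk
        self.disk_reads = 0

    def serve(self, peer, index):
        if index not in self.cache:
            self.cache[index] = b"piece-data"   # stand-in for one disk read
            self.disk_reads += 1
            for other in self.peers:            # suggest the hot piece to everyone else
                if other is not peer:
                    other.on_suggest(index)
        peer.missing.discard(index)
        return self.cache[index]

peers = [Peer("p%d" % i, range(100)) for i in range(5)]
seed = Seed(peers)

seed.serve(peers[0], 7)                         # one disk read for piece 7
requests = [p.next_request() for p in peers[1:]]
for p, index in zip(peers[1:], requests):
    seed.serve(p, index)                        # served straight from memory

print(requests)                                 # [7, 7, 7, 7]
print(seed.disk_reads)                          # 1
```

One disk read ends up serving all five peers, which is exactly the multiplier described above.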

EDIT: Despite promising myself i wouldn't, i think i will name and shame the clients that completely ignore the SuggestPiece message despite claiming support for the Fast Extensions:


The clients that i know of that fully support the extensions are:

Monday, April 02, 2007

I was just talking to J.D. Conley (part of the Coversant team; he also has a blog on the net) about some of the issues i've been having in monotorrent with threading and async sockets. He had a few good ideas about what i could try, so hopefully over the next week or three i'll be able to test out a few of them to help improve performance. Anyway, as part of the discussion i realised that i could probably replace a lot of my lock(object) statements with a much nicer ReaderWriterLock.

The way the code works in monotorrent is that i read from my lists of peers much more than i modify them, therefore it makes sense to have a locking structure that allows multiple readers and a single writer! This gave me the idea for this blog post.

You see, the problem with the ReaderWriter lock is that you have to be careful in how you use it. It's not as simple as the lock(object) statement. If you forget to release a reader lock, or worse, a writer lock, you will experience deadlocking and it will be very hard to track down the exact place where you haven't released the lock!

Also, suppose you want to acquire a writer lock but you forget to check if your thread already holds a reader lock: you will end up throwing an exception, as you will not be able to get the writer lock.

However, it's very easy to abstract away all those horrible problems with two simple structs:

using System.Threading;

public struct ReaderLock : IDisposable
{
    public ReaderWriterLock Locker;

    public ReaderLock(ReaderWriterLock locker)
    {
        Locker = locker;
        Locker.AcquireReaderLock(1000);
    }

    public void Dispose()
    {
        Locker.ReleaseReaderLock();
    }
}

public struct WriterLock : IDisposable
{
    private bool upgraded;
    private LockCookie cookie;
    public ReaderWriterLock Locker;

    public WriterLock(ReaderWriterLock locker)
    {
        Locker = locker;
        upgraded = locker.IsReaderLockHeld;
        cookie = default(LockCookie);

        if (upgraded)
            cookie = locker.UpgradeToWriterLock(1000);
        else
            locker.AcquireWriterLock(1000);
    }

    public void Dispose()
    {
        if (upgraded)
            Locker.DowngradeFromWriterLock(ref cookie);
        else
            Locker.ReleaseWriterLock();
    }
}
Suppose you have the following code snippet:
ReaderWriterLock myLocker = new ReaderWriterLock();

Instead of having to manually call myLocker.AcquireReaderLock() and remember to call myLocker.ReleaseReaderLock() in a finally block, and instead of messing around trying to decide whether you should call myLocker.AcquireWriterLock() or myLocker.UpgradeToWriterLock(), you can just use the corresponding ReaderLock or WriterLock as needed inside a using statement!

For example, instead of a snippet like this:

try
{
    myLocker.AcquireReaderLock(1000);
    // read the shared data here
}
finally
{
    myLocker.ReleaseReaderLock();
}

You can use the much nicer:

using (new ReaderLock(myLocker))
{
    // read the shared data here
}

Supposing you actually needed a writer lock, just change the using statement to:

using (new WriterLock(myLocker))
{
    // modify the shared data here
}

The WriterLock constructor will then check whether it should upgrade an existing reader lock or acquire a standard writer lock. If it upgraded, it will remember to downgrade correctly later.

Using this method not only takes fewer lines of code and makes your code more readable, it also completely removes the possibility of creating a deadlock by accidentally forgetting to release a lock!

Thursday, March 22, 2007

Just implemented an important bugfix for the last monotorrent release. Somehow i missed the fact that it didn't close connections properly ;) Fun stuff!

There's a new release up on for anyone who downloaded the other release.


Tuesday, March 20, 2007

I've opened a forum for monotorrent discussion which can be found here. If it proves unstable or crap, i'll hunt out better forum hosting, but it'll do for the moment.
MonoTorrent - Prebeta 3

So it's been a long time in the coming, but we now have MonoTorrent prebeta 3 released into the wild. Let fly with the bug reports!

Check out for the juicy details.

Also, i'd like to give a shout out to jbevain, whose amazing linker (based on the even more amazing Cecil), combined with the equally amazing merge tool, managed to reduce the size of the libraries i have to distribute from 520kB down to a mere 190kB. Zip that up, and you have a mere 90kB file to distribute. Not too shabby!

A nice (depending on your point of view) side effect of this is that i now have a single MonoTorrent.exe file to distribute instead of over half a dozen different files.

Not too bad at all!

Once again, i'd like to say thanks to David Wang for coding the encryption support. Also to David Muir for his bug reports and patches (i hope to get more off ya ;) ), and to everyone else who has bugged me about problems or asked me questions.

If i don't reply to your emails, it's because i've forgotten about them. Email again!

Wednesday, March 14, 2007

I'm just putting a shout out to David Wang who has been working tirelessly over the last week or two to implement encrypted transfers for MonoTorrent. I have to say, he's done a great job.

The current SVN has around a 90% complete implementation. As far as i can tell, there's still a bug in accepting incoming connections. Other than that, all that needs to be done is to make sure it works with more than just uTorrent and then it's good to go!

Thursday, March 08, 2007

I've been asked how i go about profiling, but i think the best explanation i could possibly give would be to look at this (very interesting) video.

Federico talking at FOSDEM

Other than that, all i can say is Reduce, Reuse and Recycle.

1) Reduce: Every object you allocate is an object the GC has to clean up. Know how many objects you're allocating and reduce the number if it's excessive. For example, i was once calling DateTime.Now several thousand times a second because of the loops i had written. By moving the call outside the loops, i reduced those allocations by a factor of several thousand.

2) Reuse/Recycle: I used to allocate a new byte array every time i wanted to send a message to another computer via a socket, or receive a message from one. This resulted in a *lot* of 16kB byte arrays floating around. Instead i created a BufferManager class. I request a buffer from it every time i need one and return the buffer to the manager when i'm finished with it. This way i never allocate more buffers than i need (typically about 40-60), whereas previously i could be allocating as many as 40 new buffers each and every second.
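A toy sketch of that idea (illustrative Python, not the actual BufferManager API; the names are made up): buffers are handed out on request and returned when finished, so the total number of allocations stays bounded by the number of buffers in flight, no matter how many messages you send.

```python
# Toy buffer pool: hand out buffers on request, take them back when the
# caller is done, and only allocate when the free list is empty.

class BufferManager:
    BUFFER_SIZE = 16 * 1024          # 16 kB, the message block size

    def __init__(self):
        self.free = []               # buffers waiting to be reused
        self.allocated = 0           # total buffers ever created

    def get_buffer(self):
        if self.free:
            return self.free.pop()   # reuse instead of allocating
        self.allocated += 1
        return bytearray(self.BUFFER_SIZE)

    def free_buffer(self, buf):
        self.free.append(buf)

manager = BufferManager()

# Simulate sending 10,000 messages with at most 4 buffers in flight.
in_flight = []
for i in range(10_000):
    in_flight.append(manager.get_buffer())
    if len(in_flight) == 4:          # "send" completed, buffer comes back
        manager.free_buffer(in_flight.pop(0))

print(manager.allocated)             # 4, not 10,000
```

With naive per-message allocation the GC would have seen 10,000 short-lived 16kB arrays; with the pool it sees four.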

I might blog more about that later if people really want me to. But that all depends on how much people beg :P

Sunday, March 04, 2007

It's been a while since i sat down and profiled monotorrent; it turns out not much has adversely affected the memory optimisations i implemented a while ago. Things are all still fine and dandy.

However, one thing i never liked was the large number of allocations due to using the async sockets. It always annoyed me that the worst part of my code was the part i had no control over. Well, it turns out that it isn't *just* the sockets causing those allocations; there's another thing i hadn't looked at yet.

My timer!

I use a single timer in the library which fires about 20 times a second. This was originally needed to make sure that certain events happened regularly. However, i've never liked this timer and i have been working to get rid of it. For example, i removed the dependence of piece downloading and piece picking on that timer (as much as is possible). Download/upload speed monitoring is still dependent on the timer; once i remove that dependence, the only thing left is to make connecting to other peers independent of the timer too. Currently i try to connect to a new peer every timer tick, so long as i haven't reached the connection limit.
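For what it's worth, here's one timer-free way to do speed monitoring (an illustrative Python sketch, not monotorrent's code; all names are made up): record a (timestamp, running total) sample whenever data arrives, and compute the rate on demand from the samples still inside the averaging window, instead of recalculating it 20 times a second on a tick.

```python
import time
from collections import deque

class SpeedMonitor:
    def __init__(self, window=5.0, clock=time.monotonic):
        self.window = window             # average over the last N seconds
        self.clock = clock               # injectable clock, for testing
        self.total = 0
        self.samples = deque()           # (timestamp, total *before* that chunk)

    def bytes_transferred(self, count):
        self.samples.append((self.clock(), self.total))
        self.total += count

    def rate(self):
        # Drop samples that have fallen out of the averaging window.
        now = self.clock()
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        if not self.samples:
            return 0.0
        oldest_time, oldest_total = self.samples[0]
        elapsed = now - oldest_time
        return (self.total - oldest_total) / elapsed if elapsed > 0 else 0.0

# Fake clock so the example is deterministic.
t = [0.0]
monitor = SpeedMonitor(clock=lambda: t[0])
monitor.bytes_transferred(16_384)      # 16 kB arrives at t=0
t[0] = 1.0
monitor.bytes_transferred(16_384)      # 16 kB arrives at t=1
t[0] = 2.0
print(monitor.rate())                  # 16384.0 (32 kB over 2 seconds)
```

No tick ever fires: the rate only costs anything when somebody actually asks for it.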

So, before i used to have an allocation graph that looks like this:

That big yellow bar is the allocations due to both the timer and the async sockets. There's a lot of execution contexts there :p After changing the timer interval to 1 second, i get the following allocation graph:

That tiny cyan coloured bar is the severely reduced number of execution contexts being allocated. Before you all get to feel the benefits of that change, i have to modify a fair bit of code. Once that's done, i'll be releasing beta 3, which is (hopefully) a lot more stable than any previous version, and faster, and uses less ram, and less CPU, and god knows what else ;)

Wednesday, February 28, 2007

Just to respond publicly to a few questions about monotorrent that have been asked over and over:

1) MonoTorrent does have a GUI! In fact, it has two: one is a console GUI, the other is a GUI created using GTK#. This is *not* the same as Winforms in .NET; it's an alternative framework for creating GUIs. While i consider the GUI very limited (which it is), it is usable. I would much prefer a Winforms-based GUI so the application would be 100% portable .NET with no other dependencies, but i don't have the time to finish my half-created one.

2) MonoTorrent does use less ram than Azureus. Considerably less. Very much so. MonoTorrent does use more ram than uTorrent, but not that much more. If you disable uTorrent's disk cache completely, monotorrent uses only 4-5 megabytes more ram than uTorrent. If you leave the disk cache enabled, monotorrent uses slightly less ram than uTorrent.

3) MonoTorrent does not support encryption. I looked at the specs a while ago, found them confusing, created some initial support but never got back to it. Once again, time is a factor. I just don't have enough.

4) MonoTorrent is not a "Linux App" or a "Microsoft application". It's a cross-platform bittorrent library.

5) Yes, monotorrent can be used on whatever tracker you want. Private/public/personal, i don't care. It works. (Just so long as it's not a udp based tracker ;) ).

6) MonoTorrent is not related to the disease known as mono (also known as glandular fever in non-american areas) ;)

7) Yes, i do like people to contribute code. But please talk to me first and keep in touch while writing it. The last thing i want is to have to say "no" after you've spent 3 weeks writing a bucketload of code for monotorrent because i've already written that code, or you've done it wrong, or whatever.

Tuesday, February 27, 2007

Heh heh heh, i feel all warm and fuzzy inside:

MonoTorrent and prebuild get slightly more famous ;)

Monday, February 26, 2007

BitTorrent Distributed Hash Table (DHT):

I got bored over the weekend and decided to take a break from hacking on the main MonoTorrent libraries and/or Mono.Xna and instead work a little on some DHT support for MonoTorrent.

Unfortunately the only documentation is this rather undetailed document. There are quite a few places where important details are left undefined, and other things either aren't mentioned at all or are defined only in vague terms.

I'm keeping notes on these things to hopefully get a more concise and detailed specification together so that other developers attempting to implement DHT can have a better spec to work off. Also it might help with removing bugs in existing implementations.

For example, one detail that isn't mentioned in the spec is that you should never send out the details of a node (another bittorrent peer) unless you have checked that the node is still active. Some implementations don't do this, so it's possible to still be getting DHT requests for a torrent you turned off several days previously.
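A sketch of that rule (illustrative Python, made-up names, not taken from any spec text): only hand out nodes that have responded recently, so anything older has to be re-pinged before it gets shared again.

```python
import time

class RoutingTable:
    GOOD_FOR = 15 * 60                       # node counts as "active" for 15 minutes

    def __init__(self, clock=time.monotonic):
        self.clock = clock                   # injectable clock, for testing
        self.nodes = {}                      # node id -> last time it responded

    def node_responded(self, node_id):
        # Call this whenever a node answers a ping or any other query.
        self.nodes[node_id] = self.clock()

    def active_nodes(self):
        # Only these may be handed out in DHT responses; stale nodes
        # must be re-pinged first.
        now = self.clock()
        return [n for n, seen in self.nodes.items()
                if now - seen < self.GOOD_FOR]

# Fake clock so the example is deterministic.
t = [0.0]
table = RoutingTable(clock=lambda: t[0])
table.node_responded("node-a")
t[0] = 20 * 60                               # 20 minutes pass
table.node_responded("node-b")
print(table.active_nodes())                  # ['node-b']; node-a is stale
```

A client that skips the staleness check is exactly the one that keeps routing traffic to peers that shut down days ago.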

Finally, i've been bugging the developer of libtorrent for tips on my own implementation, so thanks to him for answering my questions! It's appreciated.

Saturday, February 17, 2007

MonoTorrent news:

MonoTorrent has gotten increasingly stable as time has gone on. I found the cause of a few long-running bugs, which have now been crushed into tiny tiny pieces. There shouldn't be any more ArgumentExceptions thrown due to passing the wrong IAsyncResult into an End*** method. That was caused by me forgetting to clear out the download/upload queue when a peer is disconnected.

uPnP support has been enabled in MonoTorrent using Mono.Nat. So all you people with uPnP routers no longer have to worry about manually creating the port mapping in your router. It'll all be done automagically (all going well ;) ).

Disk writes are now fully asynchronous, and download speed will automatically be throttled if you are downloading faster than your harddisk can write. So you won't ever get hundreds of megs of ram being used and 100% cpu usage when exceeding your write speed.

Upload and download speed calculations have been improved drastically (ish) for torrents. What i did before was calculate each individual peer's upload and download speed, then sum that over all connected peers to get a torrent's overall download rate.

While this worked, it meant that any error in the calculation for an individual peer got magnified, resulting in a fairly inaccurate overall download speed in certain situations. The quick solution was to give each Torrent its own dedicated ConnectionManager.

There've been a few other minor changes here and there including enhancing download performance. All in all, some good stuff.
Mono.XNA - the good, the bad and the ugly

There've been a good few changes for Mono.XNA recently. Firstly, we've moved to google project hosting. The idea of having an integrated issue tracker, integrated wiki and integrated email alerts was just too tempting to resist. We also now have the ability to give write access to anyone we want, as opposed to having to go through Novell to grant users write access to the mono SVN, which can take time.

So, the new URL for the SVN/Wiki/Issue Tracker etc is here:

Feel free to drop in and start coding.

As part of the move, we've now toughened up on the contributing guidelines. Code must have tests, must follow the coding guidelines etc. Links to the relevant documentation about contributing can be seen on our google mailing list:

Also, a little on my own work on XNA. I "reverse engineered" the .XNB format about 2-3 weeks ago. The only implementation so far is the Texture2DReader i coded together. It's probably the most basic .XNB type in XNA, so it should serve as a good example of how to decode future types. Unfortunately, even the Texture2D format took quite a while to figure out. For anyone trying to hack some code together to decode other types: it isn't easy. You will spend hours with a hex editor... well, it's quite possible you're better at it than me, but it took me hours :p

Still, that's another step forward. Things are looking bright for XNA, all we need now is a dozen coders to start hacking and planning and coding their way through the classes and then we can really start getting stuff done!

Saturday, February 03, 2007

Asynchronous Process - Can't live with it, can't live without it

There are often times when you have two tasks running simultaneously at different speeds which depend on each other. For example, imagine transferring a file from computer X to computer Y. The simplistic approach is for computer X to transfer a part of the file to computer Y, wait for computer Y to write the data to the disk, then transfer the next part. This is a lock-step (stop-and-wait) approach. It's good enough for most purposes, but if you have a large communication delay between the two computers, performance will suffer. However, this method is very stable, as you know exactly how much memory and CPU time are needed to transfer and write a single block of data.

A second approach is for computer X to pass a part of the file to computer Y and (without waiting for Y to finish writing the data to disk) begin transferring the next part. This second approach can result in a *much* faster transfer rate when there is a large communication delay between the two computers simply because computer X no longer has to wait for the "I have written the data to disk" reply from computer Y.

However, the problem with the second approach is that if computer X sends 100 blocks of data a second and computer Y can only write 50 blocks a second to the disk, we either lose data by just dumping it, or we store it all in memory until we can write it to disk and memory usage climbs dramatically!

So, in order to gain the performance of the second method with the stability of the first, you need some way for computer Y to send feedback to computer X that it is sending data too fast. This is the problem i've just encountered in MonoTorrent when i converted the disk writes to be fully asynchronous to avoid blocking the main downloading thread. When i seed off my local machine, the disk can't write fast enough, so i end up storing thousands of chunks of data in memory until i tell my seeding client to stop. How easy this will be to fix, i'm not sure. But i am sure it's doable ;)
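The usual shape of the fix is backpressure: cap the number of blocks queued for the disk and refuse new data until the writer catches up. A toy sketch (illustrative Python, an assumed design rather than monotorrent's eventual solution; all names are made up):

```python
from collections import deque

class ThrottledWriter:
    def __init__(self, max_pending=8):
        self.max_pending = max_pending
        self.pending = deque()       # blocks waiting for the disk
        self.written = 0

    def can_accept(self):
        # Computer Y's feedback to computer X: "slow down" when the queue is full.
        return len(self.pending) < self.max_pending

    def receive(self, block):
        if not self.can_accept():
            return False             # caller must back off and retry later
        self.pending.append(block)
        return True

    def flush_one(self):
        # Stands in for one (slow) asynchronous disk write completing.
        if self.pending:
            self.pending.popleft()
            self.written += 1

writer = ThrottledWriter(max_pending=8)
rejected = 0

# The network delivers 2 blocks per disk write completed (X is twice as
# fast as Y), so without the cap memory use would grow without bound.
for i in range(1000):
    for _ in range(2):
        if not writer.receive(i):
            rejected += 1
    writer.flush_one()

print(len(writer.pending))           # 7 (the queue is capped at 8)
print(rejected > 0)                  # True: X was told to back off
```

Memory use stays bounded by max_pending blocks instead of growing with the speed mismatch, which is exactly the stability the lock-step approach had.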

Sunday, January 28, 2007

MonoTorrent underwent a lot of under the hood changes recently to bring things more in line with the bittorrent specs. I also finished off the implementation of the Fast Peers Extensions, so downloads with monotorrent should be faster when used in swarms that have other clients supporting the Fast Peers Extensions.

A very important fix went in today which prevents newly created incoming connections from being closed. This should improve things substantially, and hopefully removes a long-running bug of invalid IAsyncResults being passed into End*** methods.

Also, after talking with Ty Norton (who is hosting monotorrent's downloads), he told me that there have been 211.37 megabytes of monotorrent downloads so far this month. Considering the zip file is a mere 154kB in size, that's over 1300 downloads! Not too bad. Now, if only i got bug reports from one tenth of those people, i'd be happy.

Or maybe MonoTorrent is so good it doesn't throw any exceptions... who knows ;)

Monday, January 22, 2007

MonoTorrent beta 2 is now out. It has a good few bug fixes, a good few extra features and hopefully is more stable than before.

Keep the bug reports coming in guys!

MonoTorrent home page

Sunday, January 14, 2007

I leave my house in an hour for the sunny Italian slopes, where hopefully there'll be enough snow for a week's snowboarding. It's my last break before i go back to college. I'm looking forward to it, mostly because i'm beginning to get bored at home and haven't seen some of my friends in a while. The results of our christmas tests are also coming out then, which isn't really something to look forward to ;) Sure, we'll see how it goes :p

Hopefully by the time i get back, stubbing on Mono.XNA will be complete. We're oh so close to reaching that magical 100% stubbed mark. After that, with a bit of planning and discussion, we'll be able to start hacking our way through the implementation phases in a more controlled manner as opposed to dropping code in all over the place.

MonoTorrent will start getting some much needed love and attention over the coming weeks (or so i plan). I have a few things planned which i can't wait to get cracking on.

Mono.NAT (the uPnP library) is also in pretty good shape. I received a patch a few days ago which fixed a lot of FxCop warnings and made the library CLS compliant. So feel free to give that a shot for all your uPnP port mapping needs.

Finally, do keep track of the Mono.XNA status at and I won't be able to keep it updated while i'm away, but i did update it a few hours ago.

Friday, January 12, 2007

So Mono.XNA took a big(ish) step forward today, thanks to a big commit from RobLoach. Now, hopefully he'll write some NUnit tests to cover that code.

Take a look at the video linked at the end. It shows Mono.XNA successfully rendering an array of colours to the screen one after another in a loop. Render-licios. Next step, we want to be able to render multiple colours to the screen at the same time. I'm thinking a nice checkerboard pattern would be good. We'll see ;)

The video is encoded using x264. If you can't play it, VLC will play it. Just google "vlc" and download away.

Monday, January 08, 2007

I had a brain wave earlier regarding Mono.XNA while working on some code. I was wondering if we could come up with an easy way to figure out what methods need to be implemented in order to make game X run and i realised MoMa was perfect for the job!

After a quick talk with Jonathan Pobst, an svn update and a few minutes of hacking, i generated the required files that MoMa needs in order to scan assemblies for me. A few minutes later i had some stats ready.

Firstly, the simple example hits no methods that throw NotImplementedException. That confused me, as i knew that the simple example doesn't run! So i started looking at some code. It turns out there are some classes where the methods weren't stubbed out with NotImplementedException. That's a little annoying, but it shouldn't take too long to fix up.

Then i compiled the Pong example that robloach committed to SVN (thanks for including some makefiles/project files ;) ). I ran the Pong example through MoMa and came up with the following list of methods that need to be implemented. Clicky Linky.

So, the plan is to get people working on the simple example until we can get it running. Then we work on Pong until it works. I'd guesstimate* that the Simple example will be running within a month.

*Guesstimates are accurate to +- 1 decade.

Thursday, January 04, 2007

It's been a bit quiet on the blog recently. Firstly, let me wish everyone a happy new year. I hope everyone had a safe new year celebration. I spent the last few days recovering from a bout of ye olde vomiting bug, so that kept me from doing too much work.

As far as XNA goes, Sandriman has updated a few classes to reflect the changes from the beta to the final release. The current status of the Mono.Xna library can be seen here. The status of the Xna.Game.dll library can be seen here.

As far as implementation goes, we have a target. The plan is to implement classes on a need-to-implement basis. Our first milestone is to get a basic pong clone working on Mono.XNA. More on that will follow as soon as i know more ;)

For now, i'm back off to bed to rest and recover.
