Thursday, April 16, 2015


Why Bother? 

Let's face it, if you want to understand modern 3D programming you need to understand shaders. That should be enough to entice even the laziest programmers, since 3D graphics is one of the sexiest types of programming. Actually, I assume that porn applications would be the sexiest, but I digress. Strangely, I've found many very good programmers struggle to understand the basics of shaders. Beyond that, it took me an incredible amount of time to finally grok the fairly simple concepts.

And they are fairly simple. The only problem is there are several of them, stacked on top of each other, and if you screw up one tiny piece, nothing works. I mean anything from a blank screen to your items being pure white to strange technicolor nightmares which haunt your waking moments, making you question your very worth as a programmer. I'm assuming that's what you do, since I have no experience with that level of frustration. Anyway, after you recover from the resulting psychotic break, you eventually find some clue as to what you did wrong and start the process over again. My purpose is to save you thousands in doctor's and lawyer's fees.

Why does it seem so hard to learn?

I think we've been teaching this whole idea backwards. This is a different approach which is closer to how I think about things. So, if you are ready to try ONE more time, I hope I can help you make sense of the insanity that is shaders.

The Beginning

Note: I use OpenGL. The concepts are about the same elsewhere, and if there are differences you should be able to find them afterwards. Second, I'm not showing any real production code. Implementation isn't actually that important, since you will eventually get so tired of the massive amount of boilerplate that you will write your own libraries. Or, if you are experienced and humble, use someone else's.

Let's start at the beginning: A pixel.

A single pixel on the screen. That's simple enough, right? Well, it's a color, which we represent as three values, usually floating point values from 0 to 1. The values are red, green and blue.
That's all there is to light. So (1.0, 1.0, 1.0) would be white. And (0.0, 0.0, 0.0) would be black, (0.0, 1.0, 0.0) green, etc. We call a list of numbers a *vector*, because we need to call it something and vectors sound cool.
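In code, a pixel's color is just such a vector, and all the math on it is per-component (a toy sketch using Python tuples; the names are mine, not any API's):

```python
# Colors as vectors: three floats from 0.0 to 1.0.
WHITE = (1.0, 1.0, 1.0)
BLACK = (0.0, 0.0, 0.0)
GREEN = (0.0, 1.0, 0.0)

# Mixing is just per-component math, e.g. half-brightness white:
GRAY = tuple(0.5 * c for c in WHITE)   # -> (0.5, 0.5, 0.5)
```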

Advanced programmers, please stay with me here. In the olden days (think the age of VGA), you would set the screen's resolution (how many pixels wide and high) and then you would simply write to a memory location. So putting a pixel on the screen was as easy as writing to a memory address. It's the same as putting values in your own variables, once you get used to it.

So that's how we used to do things. Your program, however, had to write to all the pixels on the screen. At least the ones that changed, but finding which ones changed and which didn't took longer to compute than just drawing the pixels. So you have your poor little processor, a single core back then, trying to push pixels and run the rest of your system including the rest of your graphics app.
In games, you have a lot more to do than just push pixels. Ideally you don't want the processor to do *any* rendering at all.

Moreover, you want a completely different processor to handle the drawing. A processor made exclusively for drawing stuff quickly. Actually, you really want a bunch of processors to do it for you. Better yet, you want a processor for each pixel! Or even more than that, so you can queue them up for the next write.

So some smart guys created the graphics card. And it was good. But we programmers had to have a way to talk to the card, and there wasn't a standard way to communicate with graphics cards, so you either wrote for each card separately or just used it as a glorified VGA card. Finally all the vendors (sort of) got their act together and figured out a general interface to use the massive power of these cards. So what did they change? Well, from the screen's perspective, nothing. It still got a list of pixels which had 3 values each.


Again, we will look at one pixel. Just pick a pixel on the screen. I like the upper left one, but it can be any pixel, like the one that's stuck at a slightly different color, right next to your cursor. The one that drives you mad, like a splinter in the mind's eye. The one that would be fine if you could just stop looking at it! Come to think of it, I'll use that one.

That pixel has a secret. It's been hiding an entire graphics processor behind it. Every single one of your pixels has one, but this one is for it alone. This complete processor's only purpose is to spit 3 numbers to your pixel on the screen. Let me repeat that: every PIXEL HAS A PROCESSOR. And they changed the name from pixel to fragment. And since it mostly "shades" your screen (I don't get it either), this processor's program is called a *fragment shader*.

So what's the point of that? You are just giving it a few colors already, right? Well you *can* do that.
But you can also do more complex things, like sending it several different values and having your fragment's processor figure out the math. Your main processor could do it, but at 800x600 it would need to do it 480,000 times. And that's low resolution these days.

So good, we have a processor for each and they can do math. But the curious might wonder, "Wouldn't writing the data to each processor take almost as much time?" And they would be right!
Ideally we would like to describe a scene in simple terms, hand it over to the Graphics Processing Unit (GPU) and be done with it. So they also did that. So now we have a processor behind all your fragments which decodes a simplified description and turns it over to the thousands of fragments.
But what exactly do we send to this extra processor?


Triangles. In fact, that's ALL you can draw. Everything you've seen in every game is a triangle. You might enter them as squares, which are two triangles. Or cubes, which are 12. Or spheres, which can be any number of triangles. Or a dog, which is some huge number.

What? How can this be? They look smooth! What sorcery is this? We'll get to that another time. Right now I need to focus on a single triangle. A single triangle is pretty cool. You can cover as much or as little of the screen as you want with one. And all you need is 3 vertexes, each a vector with 3 values in 3D, leaving us 9 values total for a triangle. That's way better than trying to blast the screen with possibly thousands of color values. Better still, if a fragment isn't inside the triangle, the GPU doesn't bother running it. We can just clear the screen and only draw what we need.

And now we need more processors, one for each corner (or vertex) of the triangle. These processors, which hand values to the fragment shaders, are called *vertex shaders*. They only take vertexes; the hidden parts of the hardware take their output, create triangles, and send the triangle data to the fragment shaders. So we just create a list of triangles and, if we like, lists of values to help draw them, such as each vertex's color. The vertex shader plays with those values, one set for each vertex, and then the fragment shaders draw the triangle itself. Pretty clever, huh?

So let's recap:

  1. A vertex shader handles only a single corner (vertex) of a triangle.
  2. A vertex shader takes several lists (or arrays) of data and puts them in a digital blender. (Or *transforms* them.)
  3. The GPU then takes 3 of these outputs and creates a triangle, which it then sends to the appropriate fragment shaders.
  4. Each fragment shader has its own blender (which also *transforms* the data).
  5. Finally, the fragment shader shoots the result to the screen.

That wasn't so bad was it? It's a big multiplexer. And don't bother looking to see if your GPU has enough cores to give each fragment its own, it doesn't matter. They are virtual, but it looks like there is one for each fragment and vertex.
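Since the pipeline is easier to see in code than in prose, here is a toy, CPU-only sketch of the recap above. Every name here is invented for illustration, and the nested loops stand in for the thousands of virtual processors a real GPU gives you; it is a model, not an implementation:

```python
# A toy, software-only model of the vertex shader -> triangle assembly
# -> fragment shader pipeline described above.

def vertex_shader(vertex):
    """Runs once per corner: here it just shifts everything right a bit."""
    x, y = vertex
    return (x + 0.25, y)

def fragment_shader(x, y):
    """Runs once per fragment inside the triangle: returns an (r, g, b)."""
    return (x, y, 0.0)   # derive a color from the position, just to show math

def edge(a, b, p):
    """Signed-area test: which side of the edge a->b is point p on?"""
    return (b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0])

def draw_triangle(verts, width=8, height=8):
    """The 'hidden' hardware step: run the vertex shader on each corner,
    assemble the triangle, then run the fragment shader for every pixel
    center that falls inside it."""
    v = [vertex_shader(p) for p in verts]
    image = {}
    for px in range(width):
        for py in range(height):
            p = ((px + 0.5) / width, (py + 0.5) / height)  # pixel center, 0..1
            w0 = edge(v[0], v[1], p)
            w1 = edge(v[1], v[2], p)
            w2 = edge(v[2], v[0], p)
            inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                     (w0 <= 0 and w1 <= 0 and w2 <= 0)
            if inside:
                image[(px, py)] = fragment_shader(*p)
    return image

image = draw_triangle([(0.0, 0.0), (0.5, 0.0), (0.0, 0.5)])
```

Pixels outside the triangle never run the fragment shader at all, which is the whole point of sending triangles instead of pixels.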

And Now for Something Completely Different.

In OpenGL, the lower left corner is 0,0 and the upper right is 1,1. (Strictly speaking, OpenGL's clip space runs from -1,-1 to 1,1, but 0 to 1 keeps the numbers simple, so let's use that.) If I draw a triangle from 0,0 to 0.5,0 to 0,0.5, I will end up with a triangle covering the bottom left hand side of the screen.
About an eighth of the screen will be covered. Which is good, if you are into that sort of thing. But unless you have a perfectly square screen, the horizontal and vertical legs will be different lengths on screen. That's not a problem yet, but when you start rotating triangles it will be. It will distort them horribly.
When you show it to people they will look at it, tell you it looks bad and you should feel bad. People will laugh at you in the streets. You will lose all your friends and family. Soon you start binge drinking Red Bull and coffee. Before you know it you wake up in a trailer park in Minnesota surrounded by five very unhappy Norwegians.

So let's not do that.

How can we make our fragments square again? Well, we could stretch the coordinates along the short side by some factor. Or, equivalently, shrink the coordinates along the long side by the same ratio. That means my triangle will be transformed slightly. If we shrink the long side we will have a triangle which is the same height and width on screen. How do we do that in the computer? Well, we could modify every vertex on the main CPU before we send it to the graphics card. Or we can just give the GPU the list of changes we need and let the vertex shader transform each vertex.

Let's do that! And we can even do more. I don't like the bottom corner being 0,0. I want 0,0 to be the center of the screen. That's also very simple: before we "square the fragments", we add 0.5 to each vertex coordinate. Now the triangle is shifted to the center of the screen.
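One reasonable way to sketch those two per-vertex fixups (assuming a hypothetical 800x600 screen, so the x axis is stretched; squashing x *about the screen center* keeps centered things centered):

```python
# Per-vertex fixups: re-center, then square the fragments.
# The 800x600 screen size is a made-up example.

ASPECT = 600 / 800                    # shrink x by this to square the fragments

def fix_vertex(x, y):
    x, y = x + 0.5, y + 0.5           # treat incoming 0,0 as the screen center
    x = (x - 0.5) * ASPECT + 0.5      # squash x about the center of the screen
    return (x, y)

print(fix_vertex(0.0, 0.0))           # -> (0.5, 0.5): dead center, undistorted
```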

And we can do more, we can rotate the vertexes, or multiply them to make the triangle bigger, or move it somewhere else. And that's a real pain to try to remember. You know what would be great? A simple way to describe these changes and pass them to the vertex shaders. Maybe as a bunch of equations? But that might get difficult to remember as well. Maybe we could apply some cleverness to the problem.

Well, I couldn't, but someone smarter than me figured out an elegant solution. They used grids of numbers in some really cool ways. 4x4 grids, to be precise. They are used to transform our vertex's location on the screen. In fact we call them transforms. They are a very special kind of transform called Linear Transforms. If you want to know about this secret sauce you can read up on Linear Algebra. For the sake of this tutorial, we will treat them like magic. I will describe what they do and you will just imagine that's what they do.

The first question is: why 4x4 values? Well, that's the minimum we need to rotate, scale and translate (move) vertexes (and by extension, triangles) in 3D space. Do I have your attention? This is how we move things around in 3D space. Remember, we can only rotate, scale, and translate objects. Also, I really hate the term translate because it's so close to transform, so I'll just use move.

Now let's talk about math in general. As far as I can tell, mathematicians are trying to turn everything into high school algebra. Whenever they find something new they try to add it to something or multiply it with something. And much like different objects in software, different types of objects can be added or multiplied in different ways. A scalar (a normal number) can be multiplied against a row of numbers. Or a grid, it doesn't matter. The point is that each operation can either combine two things or, if we reverse the process, turn one thing into two. A quick example:

(4 * 5) + 2 = 20 + 2 = 22   // So by combining 2 things we can combine as many as we like.

44 = 11 * 4 = (7 + 4) * 4 = 28 + 16  = 44   // So one thing becomes several and then back again

This is called composition and decomposition. The important thing is that each of those representations is identical to a mathematician. They mean the same thing.
To us, some are more of a pain than others. With what we are doing, however, we can compose the previous operations (moving, scaling, rotating) into a series by multiplying matrix transforms together. Then we can multiply our vertex (which is a vector) against the result and get our new location. This is the majority of what a vertex shader does: it takes a list of these matrices and multiplies them with the vertex it's working on. And that little bit of magic can be explained somewhere else.
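Here is that composition sketched with plain Python lists standing in for the GPU's 4x4 grids (the particular numbers are arbitrary examples):

```python
# Compose a move (translate) and a scale into ONE 4x4 matrix,
# then apply it to a vertex.

def matmul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, v):
    """Transform a vertex (x, y, z) by a 4x4 matrix."""
    x, y, z = v
    p = [x, y, z, 1.0]                       # homogeneous coordinates
    out = [sum(m[i][k] * p[k] for k in range(4)) for i in range(4)]
    return tuple(out[:3])

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(s):
    return [[s, 0, 0, 0], [0, s, 0, 0], [0, 0, s, 0], [0, 0, 0, 1]]

# Compose: scale by 2, THEN move by (1, 0, 0).  Order matters!
m = matmul(translate(1, 0, 0), scale(2))
print(apply(m, (1, 1, 1)))                   # -> (3.0, 2.0, 2.0)
```

The payoff is that any number of moves, scales and rotations collapse into a single matrix, so the vertex shader only does one multiply per vertex.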

Mommy, Where Do Shaders Come From?

Well, shaders are programs. They take input and produce output. The only unique thing about them is they usually talk to multiple outputs. When you draw something, you select both a vertex and a fragment shader. Only one of each. So if you draw a person as a single object then you have to make shaders which handle all the colors, bumpiness, shininess, etc. which you need. Then you select your next object and do it again. You do this until your image is done and then you tell the GPU to print it to the screen. You might have dozens of shader programs in an application, although that's rare.

Since shaders are programs, the people that invented the idea of shaders decided to make them like other programs. What kind of programs? C or C++ programs. And you have to do it without your fancy graphical interfaces (IDEs). So you need to know about the general idea of compiling code.

To start with, a compiler is a program that takes in source code and outputs executable code: your end program. But in order to keep things simple, the work is broken up into pieces. A high level compiler turns your code into an object file; this step is called compiling. The object file contains the machine code but not the library references, entry points, or the other object files it will need when it runs. This is done on each component, usually a single file. Then these object files are combined into a program, which is called linking. Only then can we run our program.

So we have the process: compiling, linking and running. Shaders do the same thing. First you compile your vertex shader. Then you compile your fragment shader. Then you link them together into a program. Then you save it until you need it. Finally, when you need it, you run it.

So we take some text, usually a string, which details the steps for the vertex shader and send it to your video driver's shader compiler and get our first shader. We do the same with our fragment shader. With vertex and fragment shaders in hand, we stick them into a shader program. When we are ready to use the program, we enable it.

And last of all, much like JavaScript, the designers made a C-based language. So really you are just writing specialized C-style code with a few special variables to communicate with the next thing in the pipe. And one last piece of information: each shader can be passed two kinds of data, global values (OpenGL calls these *uniforms*) and per-item values (*attributes*). Let's use color as an example. If we set the color as a global value, then the whole thing will be one color. If we use per-item values, then we can have a different color at every vertex or even every fragment. And we declare these inside our own shaders! Maybe later I'll explain how OpenGL does this.
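To make that "C-style code" concrete, here is a minimal GLSL vertex/fragment pair, held in Python strings the way you would hold them before handing them to the driver's compiler. The variable names (`position`, `transform`, `color`) are made up for illustration; the GL entry points named in the comments are the real ones:

```python
# Minimal GLSL 1.20 shader sources. `transform` and `color` are global
# values (uniforms); `position` is a per-item value (attribute).

VERTEX_SRC = """
#version 120
attribute vec3 position;
uniform mat4 transform;
void main() {
    gl_Position = transform * vec4(position, 1.0);
}
"""

FRAGMENT_SRC = """
#version 120
uniform vec3 color;
void main() {
    gl_FragColor = vec4(color, 1.0);
}
"""

# Driver-side steps (real OpenGL entry points, shown schematically):
#   1. glCreateShader / glShaderSource / glCompileShader  (once per shader)
#   2. glCreateProgram / glAttachShader / glLinkProgram
#   3. glUseProgram when you want to draw with it
```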

And that's the short of it. As you can see, it's a fairly large subject but this is only meant to give an overview so you know what questions to ask. So in the end, it's just a pipe from your CPU, which goes to several vertex shaders which in turn go to several fragment shaders which finally shoot colors to your screen, and into you. And oddly, all that's been going on if you are reading this on a computer screen. Or as Lao Tzu says, pipes within pipes; the gateway to all understanding about shaders.

Friday, December 6, 2013

How Using Puzzles Can Estimate Software Time and Effort

I've worked on several projects so far in my career and in every one, I've been asked a seemingly simple question:

"How long will this take?"

Anyone who has answered that question knows how hard it is to estimate software development time. There are many factors in this: not knowing how fast other developers are, not understanding the scope of the problem, etc. So far every estimate I've made (until recently) has been wrong by 25% to 50%. Some under, but most over. Sometimes there were bad assumptions. Other times there was bad luck. Still other times the hardware wasn't available to develop immediately.

So how can developers predict how long code takes to make? Well, first we need to create a measurement for code development. Many different attempts have been made to try and quantify code production and productivity. A few examples are the man-month, lines of code (LOC), defects per LOC, number of files, number of checkins, number of bugs fixed, and even requirements tracing. Nothing seems to work. The best metric, although it leaves much to be desired, is lines of code (LOC). Unfortunately, a line of code means something different in every language. Curiously, the number of bugs seems to be tied to LOC regardless of language. Microsoft has clocked roughly 10-20 *released* defects per 1000 lines of code. So should we write everything as a Perl one-liner? Obviously not, but there is still a correlation and the implication is clear: smaller is better.

I'll take this even further; comments count as lines of code. They must be maintained. They must be correct. If they are wrong then serious errors occur when someone tries to change the related code. In fact, if you read a book carefully enough I would guess there are 10-20 errors (typos, grammar problems, typographical blunders, etc) per 1000 lines of text. What makes one author have half the errors of another? Experience, but usually their prose is simpler. Yes, Strunk and White's The Elements of Style is a programming manual. Sort of. Grammar is not an issue with code because, the vast majority of the time, typos are caught by a perfect proofreader: the computer.

Anyway, as the comedian says, I told you that so I can tell you this: these problems are related. What's more, they are the same problem. Why can't we just type out the same old code and learn to do it perfectly? The reason is simple: because we are doing something new. What we are measuring is the number of new things a programmer has to do per line of code! This also explains why some programmers are faster than others, and why bugs are roughly invariant over LOC. What a programmer understands, he can do well. Typos are not usually an issue, and when there are many similar names to confuse, that is almost a bug in itself.

Well, what is one of these "new things"? That's a terrible name, so I'm going to call them puzzles. Puzzles are things that take time to solve and also allow for a chance of failure. Puzzles can be solved in one of two ways: the developer either discovers the solution or researches it. Solutions also have components: an amount of code, development time, a failure rate, and head-space. Head-space is what the developer has to keep track of during development. When you ask him something about the software and he lists caveats on its use, each of those caveats takes energy to maintain. Looking at code through this lens hints that the variation in bugs per 1000 LOC between developers and languages could come from either the developer or the language bypassing puzzles. One is design and experience; the other is the power of the language.

So far this has all been idle supposition, but here it becomes practical. We can use this to find "Power Points" in sections of code. Every programmer should intuitively understand what these points are. If you have ever seen a comment saying "This is where the real work is done" or "This is magic" then the next few lines are likely a power point. The general flow of code is usually getting some data, modifying it and then sending it somewhere. Power points can happen in any of these, but I find most in the modifying or sending portions. Usually when a power point is in the sending phase it means the code is writing to a tricky interface. Most, however, happen when modifying data. With this information we can identify the difficult parts of the code and implement strategies for verifying them. So we've identified the tough bits of code; so what? We could have asked the developer and he could have told us the same information. Well, this is also a tool for time and defect estimation, as well as a metric for how good the design is.

First we can estimate how frequent power points are by using .01-.02 defects/LOC. That means there is a defect, on average, about every 50 to 100 lines of code! That also means there is likely a power point *every 50 to 100 lines of code*, and that every single solution is wrong. That seems absurdly high, but defects congregate around specific points. Also, many bugs tend to be a failure of the design to handle a specific scenario, so those are fixed by restructuring rather than by fixing code. Let's say half of the end product defects are code/puzzle errors. I've yet to do the probability curves, but I expect there are 2 to 4 puzzles per 1000 LOC with one error each. There is a lot of speculation here, but the end result is simple: *programmers never solve a puzzle and implement it correctly for all use cases.*
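The arithmetic above, spelled out (the defect rates are this post's assumptions, not measurements):

```python
# Back-of-envelope: convert a released-defect rate into an average
# spacing between defects.
defects_per_kloc = (10, 20)
lines_between_defects = tuple(1000 // d for d in defects_per_kloc)
print(lines_between_defects)   # -> (100, 50): one defect every 50-100 lines
```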

This seems bleak. How can software run at all? Simple: the program works most of the time. It only fails in special places, usually where the developer didn't think about a minute consequence or could not imagine every possible use of his solution. Where does that leave software development? We can't estimate effort or time. We can't estimate design complexity, and we can't even implement the product without many, many errors. Is there any hope of creating reliable software on time and in budget? Is there any way to know how many developers we need before we need them? Is there a way to design software sensibly yet still allow for changes for requirements and design blunders? Well, yes actually. But we must embrace the uncertainty.

What is estimating? It's taking input from various sources (requirements, employees, vendors, past projects...), modifying it using any number of ideas, algorithms, formulas, and voodoo, and finally formatting it and sending it out to be considered. That sounds familiar. It's the typical flow of code! Furthermore, managing the project and designing the product are similar tasks. One way to look at this is that missing the schedule is a bug in planning, or that a design defect is a bug in designing. This is confusing, but it's the best possible outcome. It means we can use the same methodology to plan and manage a project that we use to implement it. Programming is all about finding patterns, right? Well, here it is. You can write a project like you write a program. The downside is the same rules apply. My rule is: if there is an anti-pattern for implementing software, there is a corresponding (if not the same) anti-pattern for estimating, planning, designing, and maintaining projects. A "ball of mud" can happen in project planning as well as implementation. In fact, projects are fractal in nature, meaning an anti-pattern can happen between any interacting subsystems. They can happen at the company's program level (the project of projects) and at the interpersonal level.

This means we can use the same methodology to solve many of our problems, not just software. In fact, since it is the same process and is simple enough for everyone to understand, it can be the basis for all your processes. I have a lot more to cover, but this post has become very long and you likely need a break. I'll cover the actual process in the next post.

Wednesday, January 9, 2013

How to Write Software

I've been reading a few articles from professional programmers which I strongly disagree with.
I've heard that static types should be mandatory or there should only be one way to do things. This really bothers me as younger programmers read this and believe it. Still, I read a long time ago (COMPUTE! magazine, I think) that knowing the language is only 10% of being a programmer. This is mostly true, as once you know what you want to do, then getting there is just a matter of time. Unless the language forbids it, in which case either find another job or a scapegoat for your project. Anyway, I wanted to lay down a few general guidelines for programming in the hopes someone doesn't waste as much time finding them as I did.


  • First, there are two parts of programming: design and implementation. Often these get squished together, so build libraries of tools that let you quickly cobble together similar applications. Then pick the best one.
  • Don't start a project without a clear end goal. If you don't know where you're going, then you will never get there.
  • The software design is a project unto itself, which should have the same things every project should have: time, money, personnel and respect.
  • Software design is not about finding the best way to do X, it's about finding if X is even possible with Y constraints. The code at this point is only experimental, but much of it will find its way into production. Only you won't recognize it as it will have so much error checking.
  • You won't know how much time it will take until you are halfway done with the project.
  • Once you commit to a design the customer will change the requirements.
  • Your code is going to break, factor in tools to help fix it.


  • Communication between project members takes more time and costs more money than anything else; this is why solid communication is important.
  • Communication between project members should be done in documents, specifications, etc. There is no better feeling than fielding a question by referring them to page X, paragraph Y in document Z... Especially if they should have already read it.
  • Good communication is short and should be more graphical than it usually is. I don't care if it's a napkin, as long as it's readily available to others.  
  • I've heard somewhere that any specification longer than a page is worthless. I have to agree.
  • Users will obsess on the GUI. Get over it. Even if you wrote the code which will bring peace to the Middle East, the users will never see it and bicker over where a button should go. Use this to keep meddling stakeholders out of your business.
  • Don't rely on email, if you can walk over to the other person's desk. It gives you facetime and allows you to practice your people skills.
  • Use pictures wherever possible.


  • When structuring your project, try to create small atomic pieces which can be loosely connected. You are not creating a single solution, since you don't know when and how your project will change. Create a toolkit to solve similar problems. You'll be surprised how often you'll reuse them.
  • When planning your project, write out every bit of functionality in plain English, such as "Turns the rotor 45 degrees." Then break these down into similar statements until you get something atomic, like system calls. Then do a unique sort on every sentence. This will help you reuse your code in various places by finding similar functionality. Then try to get all the nouns and verbs together to form your project dictionary. The nouns will be your data structures and the verbs will be your functions.
  • Don't worry about object orienting your code, if it happens naturally then fine but don't try to force a structure on the problem; the problem should force the solution.
  • Break the problem into the fewest number of similar pieces. This might happen at a lower level than you expect; don't worry, it's the best thing that can happen. Assume this structure and develop the design, again.
  • Always add a Read Eval Print Loop (REPL) so you can inspect your program from the inside. This usually turns into your testing harness, just save all your little tests.
  • Don't reinvent the wheel. Which would you rather have, a perfect bike tire or a wobbly stone circle?
  • Don't be afraid of using Unix tools to solve the problem, they solve a lot of problems.
  • You should know Unix. I didn't understand programming until I learned Unix.
  • You should know Lisp. Every large program is just a primitive, degenerate form of Lisp.  
  • Keep your GUI code separate from code which does something. This way you can make command line apps to test the important parts of your code.
  • Use Lint, or something like it.
  • Always start programming with a “Hello World” program. Then add small bits and pieces of code and compile and test. When something breaks, you know immediately, you know where, and you remember what it should be doing. Generally I create a function definition that returns a (constant) value, compile and run. Add a call to the new function. Compile and run. Then I add some local variables. Compile and run.
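The "plain English, then unique sort" step from the list above can be sketched like this (the feature list is made up for illustration):

```python
# Write out every bit of functionality as a plain-English sentence,
# then unique-sort to find duplicate functionality worth reusing.

statements = [
    "turn the rotor 45 degrees",
    "read the rotor position",
    "turn the rotor 45 degrees",      # duplicate -> a reusable function
    "log the rotor position",
]

unique = sorted(set(statements))
print(len(statements) - len(unique))  # -> 1 duplicate found

# The nouns and verbs become the project dictionary: nouns suggest
# data structures, verbs suggest functions.
words = sorted({w for s in statements for w in s.split()})
```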


  • Don't code for more than 6 hours. You can't read technical documentation for 4 hours straight, so why do you expect to write it (which is harder) for longer?
  • Writing code is writing. Read your code out loud, and if it doesn't sound like something people could understand, rewrite, rename or comment.
  • Comments are lines of code too, and they must be maintained.
  • Group lines of code together into paragraphs when they all are for a single concept, such as making a system call or averaging a list.
  • Line up sequences of similar lines so the equals signs coincide. This makes them easier to read and mentally group.
  • Use an editor which auto-indents and on the subject of editors, master your editor and your keyboard. Would you trust a musician who played the piano with two fingers?
  • Use the dumbest thing that will work. You may be able to implement this with monadic hyper-widgets, but no one else will understand it. Besides, if it works it isn't dumb.
  • Files should act like chapters in a book, everything related to X part of the story goes together.

There. Learn that and, as Conan says, you can beat anybody. Notice none of this is language specific, though there are some points which might be difficult in some languages. And come back and read this again when you can; I'm sure I'll update it as I think of other habits I have.

Tuesday, December 25, 2012

Merry Christmas!

As I look back on the last year, I can't help but think of it as a wonderful gift. I've had a very good year, and although there were some difficult times I am grateful to have my family; together and healthy.

I'm still working on CLinch and Qix, which is my present to myself. In the next year I hope to share these creations with others and to make the world a little better, at least for those writing software.

Thank you all for your support which has been invaluable. I look forward to sharing the new year with you all. 


Thursday, December 20, 2012

Qix is Back... and introducing CLinch

Well, Qix is back in production. It's been a while so let me bring everyone up to speed. I worked on Qix, and I got some things working. Then I found out about shaders and ended up taking some time to learn how to use them. Then I started to write a game and got as far as porting some of the Bullet Engine and performing some movement tests. Then there were some family issues.

One day I decided to go to the Lambda Lounge Indianapolis Meeting. I made mention of Qix and they asked for a presentation. I created a quick presentation application in lisp using some of the code I had written. That's when I found horde3d. Horde is very well organized, and while it still has some warts in my opinion, I quickly built the program here.

This, however, had whetted my appetite. Using Horde as my example I cranked out a new library in a few weeks. Then came more family issues, and XCOM was released. Finally, a couple of weeks ago I decided I was going to finish enough to release on github. I asked #lispgames for a good name and I ended up choosing CLinch. CLinch is now available to use.

So then I took CLinch, ported my presentation from cl-horde, and made a youtube presentation of my original. I have been completely astounded by the response. CLinch is presently at 0.1, but is quickly approaching 0.2, which is focused on stability and finalizing features. Version 0.3 should focus on performance and usability. These are just rough guidelines, not hard rules. I have 3D asset importing targeted for 0.3 even though it is a new feature. So CLinch is moving along.

I decided to clean out my Qix repository and start from scratch. I intend to use CLinch as the graphics engine, so work on CLinch is effectively work on Qix. I expect to start working on the internals of Qix when CLinch reaches version 0.2. I've posted a rough, loosely ordered list of tasks in the Qix repository, and I hope to get a skeleton project up this weekend.

I need to start reading "The Craft of Text Editing" by Craig Finseth to get some idea how to implement Qix's text editor. Until then, my philosophy will be to "put something on screen" rather than design something in my head. This is how CLinch happened, and I feel that Qix should evolve rather than be designed. We can always refactor.

I've posted on reddit and received some very good feedback. One idea is to modify an implementation of Lisp to make Qix simpler and faster. Another is to make Qix an open standard. Both are really good ideas, but I'm not far enough along to discuss either intelligently yet.

So that's where Qix and CLinch stand. I'll continue to close out issues in CLinch, as we are more than halfway to the next release. Also, I have a Qix reddit for development updates and discussions. Thank you for your support and I look forward to working with you all!

Thursday, August 5, 2010

Why isn't Qix done yet?

It might seem odd that I can do charts and pictures but have trouble writing code. The problem is that I am not home most of the time and I only have access to email and the web. I can make a quick graphic or respond to email, but working on code away from home is near impossible. On a good night I get two hours of time just before bed, so I hack what I can and I think about things the next day. Sometimes I feel inspired and get a lot done in a little bit of time. Most of the time I get a little bit done. This is why I haven't bothered with any game development contests. This is a hobby for me, my real job is as a father and a husband.

And then there is the issue of internet access. We disconnected TV, phone, and internet earlier this year and have never been happier. Television blights my soul by strengthening my defects and loosening my pockets. The downside is that there is only one internet provider in my area, and it would cost upwards of $200 to get it. So I borrow a cup of internet from my neighbor. Some nights the wifi connection is bad and I don't get connected. I can still code, however.

I really want to see Qix brought to life, but I don't want people to think that I expect to post a project and have others work on it while I reap the rewards. If someone has the time and shares the vision, please take it and make it. I don't mind. I would give almost anything to see something like Qix created. Well, that's not true: I'm willing and able to give a couple of hours at night and a little time besides. I understand if this makes things move slowly, but I have a little boy who deserves my time. My father had to work two jobs to make a living when I was his age, and as a consequence, I feel like we never had a close relationship. If it becomes necessary to work two jobs, then I will, but until then I want to be a better father than programmer.

But I'll still work on Qix.

Monday, August 2, 2010

Events for Lazy Lisp Programmers...

Events are an important part of my Qix idea, but most event systems require quite a bit of code. In object-oriented designs, you create an event-handling class and inherit other classes from it, which is fine if you want to hard-code event types and only call designated objects. But Lisp always seems to hint at a more efficient way of doing things. With HTML and JavaScript as my examples, I decided to try something different. I started with a hash table:

(defparameter *handler-hash* (make-hash-table :test 'eq))

Then I made a couple of simple functions:

(defun register-handler (obj handler)
  (setf (gethash obj *handler-hash*)
        (append (gethash obj *handler-hash*) (list handler))))

(defun fire (object &rest e)
  (let ((handlers (gethash object *handler-hash*)))
    (loop for i in handlers
          do (setf e (apply i e))
          while e)))

Now, with these two functions I can create events for any object, as long as the (eq ...) test says two objects are the same. So let's take *standard-input*:

(register-handler *standard-input* (lambda (&rest x)
                                     (format t "I was passed: ~A~%" x)
                                     '(world)))

(register-handler *standard-input* (lambda (&rest x)
                                     (format t "I was passed: ~A~%" x)
                                     '(goodbye)))

(register-handler *standard-input* (lambda (&rest x)
                                     (format t "I was passed: ~A~%" x)
                                     nil))

(register-handler *standard-input* (lambda (&rest x)
                                     (format t "I was passed: ~A~%" x)
                                     '(anyone still here?)))

And then fire an event with (fire *standard-input* 'hello)...

I was passed: (HELLO)
I was passed: (WORLD)
I was passed: (GOODBYE)

You get the idea. Here, events can modify the following events, and even stop the event chain altogether. Of course there are many other considerations such as unregistering events and such, but this is still a lot of functionality in two small functions and a hash table!
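As for unregistering, it is just the inverse of register-handler: remove the handler from the object's list. Here is a minimal sketch of what that could look like; the name unregister-handler is my own invention, not part of Qix or CLinch, and it repeats the hash table definition so it stands alone:

```lisp
;; Assumes the same *handler-hash* as above, repeated here so
;; this sketch is self-contained.
(defparameter *handler-hash* (make-hash-table :test 'eq))

(defun unregister-handler (obj handler)
  "Remove HANDLER from OBJ's handler list, dropping the hash
entry entirely when no handlers remain."
  (let ((remaining (remove handler (gethash obj *handler-hash*))))
    (if remaining
        (setf (gethash obj *handler-hash*) remaining)
        (remhash obj *handler-hash*))))
```

Note that remove compares with eql by default, so you must keep a reference to the exact closure you registered if you ever want to unregister it.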