The Software Purist


December 16, 2009

Characteristics of Software Company Size

I find it interesting that in software companies, things often work very differently depending on company size. Of course, this can be true for any profession, but in software the effect seems larger than in other fields. In this post, I will go over some general categories, then my experiences and perception of each. I will also give a general figure for each category: the size of the whole company, not specifically the development team. These describe what I feel is a typical company of that nature; e.g., a 100-employee software company generally has at least 10-20 developers. If a company falls way outside that range, it's an atypical case that I won't be covering here.

Very Small/Tiny (5-15 employees)

Very small companies are generally run in your typical start-up style. That means that process is commonly limited or non-existent. Software tools are kept to only what is absolutely required and what is free. Teams are kept small, as is the company. Communication is widespread: e-mails will often go to the entire company, because there's little point in excluding the one person they don't apply to. The group is often close-knit and may even have lunch together every day. Management is thin or non-existent. These strategies eliminate most overhead, and because the team is so small, the company can get away with it. Software developers often work on a large piece, or the entire product, by themselves. Getting code done fast and to market is often more important than designing it well, which may come at the cost of a rewrite in a few years if the company grows, but it's considered worth the risk.

Small (16-80 employees)

Small companies borrow a lot of characteristics from their very small predecessors. Most small companies started out with years under their belt as a very small company, which gives them both good experience and a responsibility to rethink what worked in the past. Some of what worked did so only because they were small and the scale of the product was smaller, so they got away with doing things quickly and without following best practices. A small company will review some of what's been done and potentially rewrite some pieces with future goals in mind. E-mails start to become targeted so they won't go to everyone; the volume would be too high, to the point where e-mails would just start getting ignored anyway. Software process goes from none to minimal. A process is generally put in place where management exists, but its job mostly revolves around staying on top of development and, in some companies, interfacing with the customer. Process is used, but usually not "by the book", so to speak. Development generally takes a bit longer than at very small companies, but the software is generally written a bit more robustly, as company goals become more defined.

Medium (81-300)

Medium-sized companies generally stem from small companies that have grown. Depending on what phase the company is in (there's a wide gap in this range), it can exhibit characteristics closer to a small company or closer to a large one. Process becomes better defined, and they start to try to follow software processes. Management gains the additional responsibility of enforcing process. Rearchitecture generally does not put the company at financial risk. E-mails are sent purely to those who "need to know". Interaction starts to occur through layers; a software developer generally wouldn't interact directly with the CEO.

Large (301+)

Large companies are characterized by a generally lengthy existence. They have tried many methods and have an idea of what has worked for them and what has not. Process is well-defined, and there are many layers of overhead, but these are often required to ensure successful management. Operating processes are generally less efficient, but in the end they generally put out better-tested products (though not always). E-mails to the entire company are frowned upon unless you're in human resources or hold a title such as VP. Development cycles are generally longer, and the company has a long-term roadmap going out 5 years or more. Many employees will never meet the CEO or president. Developers will generally be specialists: one will specialize in a GUI technology, another in a specific back-end technology. They might work with that one technology for years.

Conclusion

So, that's my overview. There are a few points I want to note for technical accuracy. There are well-defined company sizes in legal terms that I specifically avoided adhering to: this discussion is from my perception. Additionally, as we should all be aware, every company is different, which means some of what I've said may not apply to your company. If so, don't take offense: I don't work at your company… unless I do, in which case, you should tell me who you are. 😉 Finally, some of these characteristics also vary by industry. A large gaming company is likely to be very different from a large defense contractor. I hope you enjoyed the post, and of course, I look forward to your feedback.

· · · · · · · ·

December 14, 2009

Scrum: Agility and Practicality?

In this installment, I wanted to discuss the Scrum software process. I have worked on some Scrum projects, and I intend to go through some of my experiences in the following post. Scrum is often referred to simply as Agile. However, calling Agile a software process is a bit of a misnomer: Agile is a classification of software processes that share a few common traits, namely iterative styles of development and a more fluid ability to react to changes in requirements, strategy or customer needs. Answers.com defines the word agile as follows: "1. Characterized by quickness, lightness, and ease of movement; nimble. 2. Mentally quick or alert: an agile mind." This fits the general spirit of Agile. With this in mind, I can share some of my experiences, particularly with the most recent large project I worked on that used the Scrum methodology.

The first thing to note when discussing this project is that I have worked on other Agile projects before. Generally speaking, however, prior to this one, saying we were doing Agile was more of a catchphrase; Agile wasn't really being done. Having been trained in Agile and having done my research, I was well aware that we weren't doing Agile in those cases, but I believe I was in the minority. Coming back from that tangent: in this most recent project, the group did make an effort to follow Agile, as it was being dictated by a particular customer. More specifically, we were using Scrum, although for most of the project, most of the group was unaware that Agile and Scrum aren't simply synonyms. An interesting blog article discussing this common problem can be found here.

Anyway, my experience with it wasn't really a positive one, although I don't necessarily think I can fault the Scrum process itself. The idea of a daily morning scrum with a small group of people seems logical. In our case, it wasn't so effective, because our group was as large as 16 people, including project managers, technical managers, developers and QA, spread across two offices; for the overseas office, the meetings fell in the afternoon. Because of the high headcount, the meetings took 30 minutes to an hour instead of the recommended 10 minutes or thereabouts. In addition, the meetings degraded quickly, with developers discussing specific issues instead of keeping things at a high level, following the general rule of three: 1) What I worked on yesterday, 2) What I will work on today, 3) Blocking issues. Attempts to redirect the meeting back to these three 1-3 sentence overviews weren't always appreciated.

Again, this isn't a condemnation of Scrum specifically, as I think it had a lot to do with the structure of the meeting. That being said, I think daily scrums can degrade pretty rapidly, and the risk grows with the size of the group. In addition, we typically started work in the 9-9:30 AM range, but the meeting was often held at 10 or 10:30 AM. I find I have very good concentration in the morning, and breaking this routine had an impact on my morning workflow. The increased communication is important, but I question whether it would've been better to end the day with this meeting. I ran into this same problem at my previous company when we worked with Scrum: the meetings would last too long. In that case, the group was much smaller, but again, developers are developers, and sometimes the moment they think of an issue or interesting concern, they want to discuss it right there, even if it's not the appropriate forum.

The second critical piece of the Scrum methodology is sprint length. Scrum recommends one-month sprints. In the most recent project, sprints were generally two or three weeks at various points during the process, which I felt was too short. The problem with this reduced sprint length is that there's too quick a push to show results. When designing a complicated, major feature that integrates with multiple system components, this can be too little time. Sprints this short also require leaving the system in a stable, demonstrable state for the customer and management too frequently. In general, if you decide to do Scrum, I recommend you stick to the suggested one-month sprint length.

Some other important artifacts of the Scrum process are the product backlog, user stories and the burn down chart. The product backlog makes a lot of sense; unfortunately, in our process, the product backlog was not used by upper management. I believe this is because it was the first time the company had really tried to use Scrum; I think things would've flowed a lot better had the product backlog been used effectively. User stories seem to be the Scrum replacement for Waterfall-style requirements. The idea of user stories makes sense, but I found them very time-consuming to write. From the user stories, we derived the sprint backlog tasks, which were intended to be short tasks between 4 and 16 hours. The final piece is the burn down chart, which gives you a simple representation of whether you're going to hit your target for the sprint, and an idea of whether you're nearing the danger zone.
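
Since a burn down chart is just remaining work plotted against an ideal straight line, the arithmetic behind it is simple. Here is a minimal C++ sketch (the numbers and names are hypothetical, not from the project) of how the chart's data might be computed:

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
	const double totalHours = 400.0;   // hours committed to the sprint backlog (made up)
	const std::size_t sprintDays = 20; // a one-month sprint, in working days

	// Hours remaining at the end of each completed day so far (sample data).
	std::vector<double> remaining;
	remaining.push_back(390.0);
	remaining.push_back(370.0);
	remaining.push_back(365.0);

	for (std::size_t day = 0; day < remaining.size(); ++day)
	{
		// The ideal line burns totalHours down to zero linearly over the sprint.
		const double ideal = totalHours * (1.0 - static_cast<double>(day + 1) / sprintDays);
		std::cout << "Day " << (day + 1) << ": remaining = " << remaining[day]
			<< ", ideal = " << ideal
			<< (remaining[day] > ideal ? "  <-- nearing the danger zone" : "")
			<< std::endl;
	}
	return 0;
} // end main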

One problem I frequently encountered with the sprint backlog tasks is that the development team often over-committed. It's very easy to look at a small feature and say it will take only four hours, but the four-hour estimate was often a best case, without any consideration for integration, testing and bug fixing. So the team wound up working 60-70 hour weeks to meet the goals, due to the mis-estimates. Once meeting these mis-estimates became the expectation of upper management, they pushed for the same level of commitment in every sprint, so the development team had dug itself into a hole due to the poor estimating.

A critical piece of the Scrum process is the sprint planning meeting, whose goal is to plan the tasks that will be done for the sprint and decide who will do them. The process calls for the sprint planning meeting to take no more than 8 hours and for the team to sign up for the tasks they will do. However, that frequently was not what happened. Mostly, it was a few senior developers and management deciding who would work on the tasks. As you might have guessed, I was among the deciding group, although I objected to not including everyone. Furthermore, because not everyone was included in the meeting, the meetings often ran way over; I recall one that started in the morning and ended about 15 hours later. I generally recommend preparing prior to this meeting and letting the tasks flow from the product backlog. Everyone should come into the meeting prepared and with some goals in mind. If people come unprepared, the meeting will drag. If not all the right people are involved, again, the meeting will drag. Unfortunately, no matter what the outcome is, you're almost guaranteed to lose a full day of development every sprint just deciding what tasks will be worked on.

The final piece I’d like to mention is that during a sprint, Scrum dictates that tasks shouldn’t be changed. Management cannot dictate priorities for the current sprint. They can change requirements for future sprints, but not the current sprint. In our case, this advice was not heeded. Both management and the customer would frequently change direction during the sprint. This resulted in a lot of rework and a lot of late nights adjusting to the new requirements. I have a feeling that this is commonplace during Scrum, which is part of my distaste for the process itself.

Ultimately, I think that Scrum makes a lot of sense on paper. Having gone through multiple projects which have made some use of Scrum, including one that made heavy use of it, I have not seen any of them work effectively. In a way, Scrum promotes poor development practices, because the focus on rapid change somewhat discourages thinking a design and architecture through, which can lead to premature gratification and, ultimately, a design that is more difficult to maintain. I'm definitely no fan of a strict Waterfall process, but I am confident that Scrum, in the process of fixing some of the flaws of Waterfall, has created some new ones in its place. I do think that education in the usage of any software process, whether it's Scrum, XP, RUP, CMMI, etc., is essential to things working smoothly. The fact is that when faced with deadlines and pressure, people will revert to their most familiar behavior, some of which does not play well with a process like Scrum. On a side note, a few developers I've worked with in the past have been very fond of the XP process. I have some doubts about XP as well, but I can say that when I have used it, I have seen some benefits.

· · · · · · · · · · · · · · · ·

December 6, 2009

Programming Paradigms

If you're an experienced programmer, this probably won't be new information, but I hope to at least present it in a new way. When developing software, there are different ways to think about the problems you're trying to solve, and they affect the entire process, from initial design to how the code is written, even to how it's tested. I discuss a few of these in this article.

Unstructured

These days, unstructured styles of programming are generally frowned upon. In the old days, you might have programmed in an unstructured style in older dialects of Fortran, COBOL, BASIC, etc., using GOTO to move between sections of code. All variables were global to the program, so you had to be very careful about usage and naming. This type of code was simple to write at first, but very difficult to read, and it didn't scale well at all: as programs got larger, they became exponentially more difficult to maintain. There isn't much to talk about here, because coding in this manner is rare nowadays, except in specialized fields. You can imagine a program looking very sequential, though. Something like this:

if my height is less than 4'
goto short
elif my height is greater than 6'
goto tall
else
goto normal

short:
print "You are too short to go on this ride"
goto exit
tall:
print "You must wear a helmet."
... offer helmet ...
if accept
goto getonride
else
print "No dice..."
goto exit
normal:
goto getonride

getonride:
print "Welcome onto the ride."

exit:
print "You must leave.  NOW!"

As you can see, this can quickly get out of control. Reuse was almost non-existent.

Procedural Programming/Imperative Programming

Procedural programming was the first type of structured programming, and it started to become widespread in the late 60s and early 70s. It was probably the first major step towards the programming we do today. Structured programming is still used quite a bit and is the basis for some of the later programming paradigms; it is responsible for mostly eliminating the widespread use of GOTO. This methodology was more commonly taught at the time I was in college, as Object-Oriented programming was newer then and not well understood outside of a small community. The main concept behind it is that any task can be broken up into sub-tasks. Emphasis is placed on functionality and data structures. With this, it became easy to break down a workflow with direct relationships and traceability from a functional specification, often provided by a customer. The specification can be derived directly into software functional requirements, then into software code, and then into tests. Because all of the emphasis is on functionality, and the code is structured in that manner, it provides a lower barrier to entry than newer techniques such as Object-Oriented programming. For the same reason, the common tool for figuring out where a piece of code is implemented is simply grep (on Linux/Unix) or find (on Windows). Data definitions can be provided by systems engineers, because once the functionality is defined, the required data is also easily derived.
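
To make the sub-task idea concrete, here is a minimal sketch (the function names and requirements are hypothetical) of a specification decomposed one function per requirement:

#include <iostream>

// Each sub-task is its own function, tracing directly back to one
// line of the functional specification.
void readInput(double& value)    { value = 42.0; }  // Requirement 1: acquire the data
void processValue(double& value) { value *= 2.0; }  // Requirement 2: transform the data
void writeOutput(double value)   { std::cout << "Result: " << value << std::endl; } // Requirement 3: report the result

int main()
{
	double value = 0.0;
	readInput(value);
	processValue(value);
	writeOutput(value);
	return 0;
}

Finding where "Requirement 2" lives is exactly the grep exercise described above.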

Some of the common procedural programming languages are Ada-83, Algol, Pascal and C. Of course, many of the procedural languages later gained Object-Oriented features in new revisions (Ada-95, C++, etc.). One of the main problems with procedural programming concerns reuse. You can successfully meet functional requirements, but later notice that different components of the system have 95% similar functionality, and it becomes difficult to directly express these relationships in your code. To handle reuse of sections of code where different types may be used, you wind up with large if and/or switch statements. The problem then becomes that for each new type that participates in this relationship, you wind up modifying working code, which is always risky. It also makes it difficult to supply a working library, because elements of the library will often need to be changed.

As an example, let's take the case of a vehicle and then provide multiple types of vehicles: a car and a bus. Example in C:

void drive(int type, void* obj)
{
	switch (type)
	{
		case CAR:
		{
			Car* car = (Car*)obj;
			// ... Logic to accelerate car
		} break;
		case BUS:
		{
			Bus* bus = (Bus*)obj;
			// ... Logic to accelerate bus
		} break;
		default:
		{
		} break;
	}
}

Later, we provide a boat:

void drive(int type, void* obj)
{
	switch (type)
	{
		case CAR:
		{
			Car* car = (Car*)obj;
			// ... Logic to accelerate car
		} break;
		case BUS:
		{
			Bus* bus = (Bus*)obj;
			// ... Logic to accelerate bus
		} break;
		case BOAT:
		{
			Boat* boat = (Boat*)obj;
			// ... Logic to accelerate boat
		} break;
		default:
		{
		} break;
	}
}

As you can probably see, it's relatively easy to figure out where to insert the new code, but the maintenance burden grows quickly once you take into account that each function, such as drive, park, accelerate, addFuel, etc., would need this sort of switch statement. You would wind up changing a lot of working code.

Object-Oriented Programming

Object-Oriented programming could be considered the next phase in the evolution of programming languages. It largely gained popularity due to C++ (formerly C with Classes). Object-Oriented development changes the emphasis: instead of starting with the functionality of the system and its data, you start out by identifying the objects in the system. So, imagine a game such as the original Super Mario Bros. You could identify objects such as your main character (Mario, Luigi), the enemies in the world (Goombas, Koopa Troopas, Bowser, etc.), the blocks, the pipes, the moving platforms, and even the world itself. The functionality comes in when the objects communicate with each other; in technical terms, this communication is called messaging. The functions are owned by objects and are called methods instead of functions. This ownership is based on something being able to do something. For example, Mario can jump, so Mario might have a method called jump(). Mario can also shoot fireballs, so he would have a method called shoot(). Since Mario and Luigi behave the same, they might simply be two separate object instances of the same class, called Player. The enemies have some similarities, so they could be structured with a base class called Enemy and derived classes which implement the differing functionality. It's a different way of thinking about things.

Now, what I've described so far might not make sense if you're not proficient with Object-Oriented programming, so let me go back to the Vehicle example. Here's what it would look like in OO terms:

class Vehicle
{
public:
	virtual ~Vehicle() {} // allows safe deletion through a Vehicle pointer

	virtual void drive() = 0;
};

class Car : public Vehicle
{
public:
	virtual void drive()
	{
		// ... Logic to accelerate car
	}
};

class Bus : public Vehicle
{
public:
	virtual void drive()
	{
		// ... Logic to accelerate bus
	}
};

class Boat : public Vehicle
{
public:
	virtual void drive()
	{
		// ... Logic to accelerate boat
	}
};

In its simplest form, OO is simply a reorganization of code. However, it is obviously much more than this, and this is a very simple example which doesn't touch all of the depths of how far things can go, but I think it's fine to start with. When you say that a Car is derived from a Vehicle, you are effectively saying that a Car is-a Vehicle. This is the basis for this type of inheritance. You should only derive if you can logically say that something is something else. For example, you shouldn't have Bus derive from Boat, because a bus is-not-a boat.
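
To see what this buys you, here is a minimal usage sketch (a hypothetical main, building on the classes above; the virtual destructor in Vehicle is what makes deleting through the base pointer safe):

#include <cstddef>
#include <vector>

int main()
{
	std::vector<Vehicle*> vehicles;
	vehicles.push_back(new Car());
	vehicles.push_back(new Bus());
	vehicles.push_back(new Boat());

	// No switch on type: each object supplies its own drive() logic, and
	// adding a new Vehicle subclass requires no changes to this working code.
	for (std::size_t i = 0; i < vehicles.size(); ++i)
	{
		vehicles[i]->drive();
	}

	for (std::size_t i = 0; i < vehicles.size(); ++i)
	{
		delete vehicles[i];
	}
	return 0;
}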

So, if you look at the above example, I think you can see how this flows really nicely for things like GUIs. That's why you can have a framework where every visual element might be derived from Control, Widget, or even Window (side note: except in ActionScript, where for legacy reasons everything is nonsensically derived from MovieClip). There's logic behind this: a button is-a control; a push button is-a type of button; a list box is-a type of widget or control; and so on. It gets a little trickier when the base class is called Window (or CWnd in MFC), but if you use this type of framework, you can accept the notion that each control could be considered a window, even though there were better name choices.

Object-Oriented programming suffers from a set of problems of its own, even though it is an improvement over procedural programming. The first is that there's more typing, at least upfront. As developers, we try to reduce typing, but OO can often be more verbose than necessary. Of course, wizards and newer programming languages aim to reduce this overhead more than languages like C++ or Java, which can often be overly verbose. The OO theory, though, is that through reuse you avoid much of this typing as you develop higher-level pieces, because you're basically picking from a toolbox. OO also lacks the direct traceability of procedural programming, because functional requirements no longer map directly into design, code or tests. With Object-Oriented Design, Object-Oriented Programming and Object-Oriented Testing, the traceability becomes less direct, and newer processes try to take this into account a bit more.

· · · · · · · · · · · · · · · · ·

December 2, 2009

Breaking the Infinite Loop

So, recently I was working on a project where we ran into an interesting bug, which was due to a basic programming problem. Newbies and experienced developers alike are familiar with the issue known as an infinite loop. An infinite loop occurs either when a loop has no termination condition, or when, in a particular case, its termination condition can never be satisfied. Barring a typical logic bug, for loops are usually not susceptible to this; however, while loops and do-while loops certainly are.

The condition happened during a particular game. The project actually went very smoothly, except for this one bug. It stemmed from the requirement that when there were no moves available, the board be reshuffled. However, the reshuffle had some particular conditions about how it worked. The short story is that under particular situations, the reshuffle was guaranteed not to succeed. Each time it failed, it retried, and thus the infinite loop. The code may have looked something like this:

while (no moves left)
{
        reshuffle
}

Of course, there was more code than this, but these are the important parts. In the scenario I described, this loop went on infinitely and wound up being an annoying bug. Once we got the bug report, it was clear within hours that the criticality of this particular section of code had not initially been realized, largely because prior to a change in one of the levels, this scenario never occurred. But it highlighted an important thing: this is a bug that should be recoverable. Fortunately, there is a relatively simple solution. In this article, I wanted to describe some attempts I made to automate recovery, while also providing enough information to resolve the underlying problem.

Note that having your debugger set to break on thrown exceptions is very helpful for this sort of debugging. Here was my first attempt:

#include <iostream>
#include <sstream>
#include <stdexcept>

#define WHILE_MAX(cond, maxIters) \
	size_t iters = 0; \
	while (cond) \
		if (iters++ >= maxIters) \
		{ \
			std::ostringstream errorStream; \
			errorStream << "Failed loop condition (" << #cond << ")"; \
			std::cerr << errorStream.str() << std::endl; \
			throw std::runtime_error(errorStream.str().c_str()); \
		} \
		else

and it is called like this:

try
{
	WHILE_MAX(true, 1000)
	{
	} // end while
}
catch (const std::exception& e)
{
	std::cerr << "Caught the following exception: " << e.what() << std::endl;
} // end try-catch

which provides the following output:

Failed loop condition (true)
Caught the following exception: Failed loop condition (true)

This solution works, but poses a few drawbacks:

  1. Firstly, the loop variable is outside the while scope. This poses some risk of name conflicts.
  2. The exception class is hardcoded to runtime_error, which makes it indistinguishable from other errors in terms of debugging.
  3. The logging mechanism is hardcoded to cerr, which may not be desired.
  4. You cannot dynamically change the iteration value.  You would have to rebuild with a disabled version of the macro.
  5. Finally, you cannot enable or disable this checking during runtime.  Again, you would need to rebuild.

Of these, I think the last two points are the most important. Configurability is extremely important. When you first write the code, it is newer and, therefore, riskier; perhaps after a month of being out in the wild with no issues, you have more confidence in it, and want to gain some efficiency by skipping the check. Updating a configuration file is the most user-friendly way to do this.

To address these issues, I took the following approach.  First, to handle the issue of the exception class, I created this class:

class InfiniteLoopException : public std::runtime_error
{
public:
	InfiniteLoopException(const char* p_pExceptStr) :
	  std::runtime_error(p_pExceptStr)
	{
	} // end InfiniteLoopException

	virtual ~InfiniteLoopException() throw()
	{
	} // end ~InfiniteLoopException
}; // end InfiniteLoopException

However, it was clear that this class alone doesn't provide enough flexibility. To meet this goal and some of the other goals, I created a new interface class called ILoopManager, with a concrete implementation called LoopManager. The loop manager is responsible for keeping a registry of all loops registered with it. Any loop not registered does not use the check, and neither does any loop registered with an iteration value of 0 (the default). The concrete implementation defines how to log, how to throw the exception, and even whether to throw an exception at all.

The interface looks like this:

class ILoopManager
{
public:
	virtual ~ILoopManager()
	{
	} // end ~ILoopManager

	virtual void registerLoop(const char* p_pFunction, const char* p_pLoopId, size_t p_maxIterations) = 0;

	virtual size_t maxIters(const char* p_pFunction, const char* p_pName) = 0;

	virtual void handleLoopError(const char* p_pFile, const char* p_pFunction, size_t p_line,
		const char* p_pConditionString, size_t p_maxIterations) = 0;
}; // end ILoopManager
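
A minimal sketch of a possible LoopManager follows (the map-based registry here is my assumption; any bookkeeping scheme that satisfies the interface would do):

#include <cstddef>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

class LoopManager : public ILoopManager
{
public:
	virtual void registerLoop(const char* p_pFunction, const char* p_pLoopId, size_t p_maxIterations)
	{
		m_registry[makeKey(p_pFunction, p_pLoopId)] = p_maxIterations;
	} // end registerLoop

	virtual size_t maxIters(const char* p_pFunction, const char* p_pName)
	{
		std::map<std::string, size_t>::const_iterator it =
			m_registry.find(makeKey(p_pFunction, p_pName));
		// Unregistered loops return 0, which disables the check entirely.
		return (it != m_registry.end()) ? it->second : 0;
	} // end maxIters

	virtual void handleLoopError(const char* p_pFile, const char* p_pFunction, size_t p_line,
		const char* p_pConditionString, size_t p_maxIterations)
	{
		std::ostringstream errorStream;
		errorStream << "Failed loop condition (" << p_pConditionString << ")";
		// A richer implementation could also log p_pFile, p_pFunction, p_line
		// and p_maxIterations, or write to a log file instead of cerr.
		std::cerr << errorStream.str() << std::endl;
		throw InfiniteLoopException(errorStream.str().c_str());
	} // end handleLoopError

private:
	static std::string makeKey(const char* p_pFunction, const char* p_pName)
	{
		return std::string(p_pFunction) + "::" + p_pName;
	} // end makeKey

	std::map<std::string, size_t> m_registry;
}; // end LoopManager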

With these concepts in mind, you can change the macro to be:

#define WHILE_MAX_2(loopManager, name, cond) \
	size_t name##_iters = 0; \
	size_t name##_maxIters = loopManager.maxIters(__FUNCTION__, #name); \
	bool name##_checkLoop = (name##_maxIters != 0); \
	while (cond) \
		if ((name##_checkLoop) && (name##_iters++ >= name##_maxIters)) \
		{ \
			loopManager.handleLoopError(__FILE__, __FUNCTION__, __LINE__, #cond, name##_maxIters); \
		} \
		else

The new version of the macro prefixes each variable it declares outside the loop with the specified name, to significantly reduce the chance of collision and to give the user a way to choose a name which will not collide.

With this in mind, you can use the concept in the following ways:

	LoopManager loopManager; // the concrete implementation sketched above
	loopManager.registerLoop(__FUNCTION__, "loop1", 1000);

	try
	{
		WHILE_MAX_2(loopManager, loop1, true)
		{
		} // end while
	}
	catch (const InfiniteLoopException& e)
	{
		std::cerr << "Caught an infinite loop exception: " << e.what() << std::endl;
	}
	catch (const std::exception& e)
	{
		std::cerr << "Caught the following exception: " << e.what() << std::endl;
	} // end try-catch

This provided the following output:

Failed loop condition (true)
Caught an infinite loop exception: Failed loop condition (true)

This meets all of the requirements I mentioned. It removes the global nature of the solution, because different implementations can react differently and can be used in different modules in different ways, giving a lot of flexibility. Reading from and writing to a configuration file is left as an exercise for the reader. Hopefully, you can see the value of this solution.

· · · ·

November 28, 2009

“The Zone”

One of the mysteries of software development, as well as many other professions, is the concept of "The Zone", also known as "The Flow". Wikipedia defines it like this: "Flow is the mental state of operation in which the person is fully immersed in what he or she is doing by a feeling of energized focus, full involvement, and success in the process of the activity." You can witness this in a sport such as basketball; I recall quotes from Michael Jordan in the 90s about a game he had in the NBA Finals, and how he was in the zone as he single-handedly led his team to victory.

The fact is that when you're in the zone, you're performing way above your normal state: things come naturally, thought is mostly unconscious, and you have extreme focus, which ultimately lets you process things far more quickly and effectively. Developers get in the zone too. I know that when I'm in the zone, I've done multiple days' worth of work in a few hours. This usually happens for a combination of reasons: being well rested, interest in what I'm doing, determination, and a goal that I want or need to achieve.

Other developers I've worked with often describe a similar process. A mentor of mine from years back once said that he often starts a project slowly; then, when he's working, suddenly things just "click", he quickly makes up for lost time, and that's why his performance was good. I think this happens to a lot of us. I believe that the moment when things "click" and you suddenly get a massive amount of work done in the course of a few days is "The Zone". In software development, the zone doesn't happen all the time, and it's important to recognize it when it does.

Breaking the Zone

One of the challenges of the zone is working in an office, where there are potentially a lot of distractions to pull you out of it. Mostly, I find these to be other employees. Ironically, the lower you are in the chain, the more zone opportunities you will have, because you generally have fewer points of contact. Each time I'm interrupted as a developer, it can take as little as 10 minutes and as long as an hour before I re-enter "The Zone". This kind of productivity loss should be considered unacceptable by most companies. If you're a company saving a few dollars by keeping your employees crunched together, shame on you: it adds stress for your developers, who are often not a social bunch to start with.

Keeping the Zone

Obviously, as Joel Spolsky suggests, giving each person their own office is a big help, because it reduces the temptation to distract others. This is a huge benefit, but it's probably not possible for every company. In general, I think high-walled cubicles are also fine. Noise levels should be kept in check by management and policy: keep music played out loud to a minimum, take personal conversations elsewhere, and make impromptu meetings a rarity that happens in conference rooms. This policy should be enforced, and if someone violates it, they should be asked to be more considerate. Having inconsiderate neighbors is never a good experience.

Another thing that helps is investing in a good set of headphones. For some people, it's best to get a studio-style or gamer headset, which will block out almost all noise. If you want to stay aware enough to hear nearby voices while ignoring most other sounds, you can use a cheaper headset. Note, though, that this can itself be an unnecessary distraction for you, but it can definitely be a tool for the paranoid.

Conclusion

"The Zone" is a psychological state every developer loves getting into. The work flows faster and better, with higher quality and focus. It is sometimes difficult to stay in the zone due to outside distractions, but with some careful techniques, companies and individuals can make it easier to stay in "The Zone".


· · · · ·

November 24, 2009

Ada-like Range Validations

Here's another post that I have migrated from my old blog. I will post a follow-up in the future.

I’ve been reading a pretty cool blog and I wanted to post some of the feedback I had for his ideas here. Mr. Edd’s blog is at: http://www.mr-edd.co.uk/?p=99#comment-4313

Basically, here's what I wrote to Mr. Edd. I'll fill in some more background later, perhaps:

Interesting post, and I did want to compliment you on having an awesome blog. I totally see where you're going with this, although when I did something to this effect, I implemented things with a slightly different approach. The one concern was too many implementation details slipping into the interface, in cases where that information might be confusing or unhelpful. What I had developed some years back was template wrappers around primitives as well. Upon any operation on the primitive wrapper where the value might change, I checked its validity based on template arguments to the class, which specified a valid range. Then, in a class like Matrix, the member would be of this safer type. Here's a quick (certainly not optimized) example for a dynamic matrix:

template <class T, class TRangePolicy>
class RangedPrimitive
{
public:
	// ... provide overloads for every operation that can change the value;
	// ... on each overload, if there is a possibility of the result being
	// ... outside the range, check and throw an exception if so ...

	// one example to demonstrate
	RangedPrimitive& operator=(const T& value)
	{
		if (!TRangePolicy::isValid(value))
		{
			throw Constraint_Error();
		}
		internalValue_ = value;
		return *this;
	}

private:
	T internalValue_;
};

template <class T>
class Matrix
{
public:

	void set(size_t row, size_t column, const T& value)
	{
		// This assignment throws if the range policy of the element is violated
		elements_.at(row).at(column) = value;
	}

private:
	std::vector<std::vector<RangedPrimitive<T, RangeDisallowNans> > > elements_;
};
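
For completeness, the RangeDisallowNans policy used above could be as simple as the following sketch; a policy only needs a static isValid, and the bounded-range variant is a hypothetical extra:

struct RangeDisallowNans
{
	// A NaN is the only value that compares unequal to itself.
	template <class T>
	static bool isValid(const T& value)
	{
		return value == value;
	}
};

// A hypothetical bounded-range policy in the same style. The bounds are
// integers, since floating-point template parameters are not allowed.
template <long Min, long Max>
struct RangeBetween
{
	template <class T>
	static bool isValid(const T& value)
	{
		return value >= static_cast<T>(Min) && value <= static_cast<T>(Max);
	}
};

A Matrix<double> element would then reject NaNs at assignment time, raising Constraint_Error much as Ada would.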

I got this idea because at my first job I did a lot of Ada, and Ada has the idea of valid ranges and subranges built into the language. It definitely made things a lot easier, because there was a predefined exception called Constraint_Error that was raised whenever an attempt was made to set something outside its range. Ada compilers also had the ability to check for any range violations they could figure out statically. So, for example, if you did something like the following:

rangedValue = 0.0 / 0.0;

you would actually get a compile error. You can simulate this in C++ by providing template versions of all methods that take a value rather than a type and remembering to call those, but I think that gets a bit messy.

At the time, Edd also posted this comment:

July 13, 2008 1:05 PM

Hey! I’m glad you’re enjoying my little corner of the internet!

The RangedPrimitive idea is nice and I can think of a number of applications.

As it happens, I came across the same technique on DDJ only a few days ago. You might be interested in the alternative implementation given there.

· · · ·

November 16, 2009

Flash and FLA Structure

So, recently I have been working with Adobe Flash on my first multiplayer Flash game. I have normally focused on the server side of these sorts of games, but this time I get to work on both sides of the puzzle, which I am very excited about. I can't provide any particular details about the project, but it is an interesting project, nonetheless. With that said, I ran into some caveats with the design of Flash FLAs that I would like to talk about.

Introduction to the Problem

The first thing to note is that I made a branch from a single-player version of this game, which still receives changes from time to time. When I made my code changes, I hadn't realized how the art was embedded in the FLAs. Artists sometimes have the habit of directly embedding images in the FLA without providing the actual source image. Furthermore, all of the UI definitions you would normally find in some sort of external resource file are embedded directly in the FLA structure. Finally, the FLA format is binary and unpublished.

Because of these facts, when I went to merge my FLA with the one from the trunk, the two files conflicted, with no mechanism for resolution. I had a discussion with some people, and everyone lamented that Adobe had intentionally decided not to open up the FLA format.

Alleviating the Problem

After some more discussions, it became clear that there are a few things that help with the problem. The first is better communication between programmers and artists about what is actually being changed. Oftentimes, people make changes and don't communicate with each other well enough, so each cog knows their own piece, but perhaps not other people's pieces. This is a general problem, not limited to Flash, but it is certainly worsened by the unfortunate design of Flash FLAs.

Another idea has to do with Flash SWC modules, which are analogous to your typical C++-style static library (e.g., .lib with Windows compilers or .a on Unix). SWC modules give you a means to split your compiled code and your GUI code into separate modules. This makes a lot of sense to me. It's more work upfront, but by using SWC modules, you can assign different UI screens and panels to owners, which I think is much more reasonable and makes merging much easier. Of course, it doesn't fix the problem, but it certainly turns a disaster of a problem into a more manageable one.

If you have any more tips on how to streamline this process, or any information about whether this may become more maintainable in CS5, I would find that very interesting. Please let me know your thoughts.

Sources:
http://www.communitymx.com/content/article.cfm?cid=dc2c0

· · · ·
