
December 27, 2009

Front-End vs. Back-End Development

Hi all. There was a bit of a break because of the holidays, but now I'm back with a new post that I hope you will enjoy. In this edition, I wanted to discuss some of the differences between front-end and back-end software development. One of the interesting things about my career to date is that I've probably had a pretty even split between front-end and back-end work. As such, I've noticed some interesting differences between the two, particularly in how they are approached. When I speak of front-end development, I'm really speaking of visual-related code, particularly GUI code, so note that some of these observations don't apply as heavily to related areas. When I speak of back-end development, I'm generally speaking of the logic in a multi-user server application, such as a socket server, which handles requests from many users. With that, let's get started.

So, the first thing to discuss here is ease of development. I'll admit outright that this is a bit of an unfair category, because it varies significantly depending on the framework. For example, with some well-designed frameworks, setting up the GUI can involve a lot less programming. If you use one of these frameworks, a lot of the work can even be done by artists or designers, without much intervention from developers. Of course, if you're doing lower-level GUI work, such as pure Win32 or X Windows/Motif, this doesn't really apply to you. Having said that, for many applications, choosing a relatively high-level GUI framework is commonplace, unless restrictions such as performance requirements or customer constraints prevent it. So, from the front-end side, you often wind up in a state where the framework is already in place, you make small extensions, and the application basically derives from that. From a pure GUI standpoint, while designing the overall architecture is useful, in some cases, especially in smaller applications, you can get away without it, or define it later through refactoring.
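
To make the "a lot less programming" point concrete, here's a minimal sketch of wiring up a button in a high-level framework. I'm assuming Qt 5 here purely for illustration; the point isn't tied to any one framework.

```cpp
// Minimal GUI wiring with a high-level framework (Qt 5 assumed for this sketch).
#include <QApplication>
#include <QPushButton>

int main(int argc, char* argv[]) {
    QApplication app(argc, argv);

    QPushButton button("Save");
    // One connect call wires the click to a handler; the framework
    // handles painting, layout, and event dispatch for us.
    QObject::connect(&button, &QPushButton::clicked, [] {
        // ... kick off a file save here ...
    });

    button.show();
    return app.exec();
}
```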

From the back-end side, the framework you're using is often lower-level. Even if it's higher-level, you still need to architect the application to accommodate future needs and a large volume of user requests, including making use of multiple cores and hiding latency. Back-end work mixes both low-level and high-level concepts, and there is often a significant amount of work required to get something up and running. In many applications, the heavy logic lives on the back-end for reasons of security: at other tiers the data may not be trusted, but on the back-end it is. Furthermore, back-end software typically operates with a database. The database has the interesting property of making things both simpler, in terms of coordinating many simultaneous requests, and also more complicated than typical front-end data storage. For example, on the front-end, it may be perfectly acceptable to store all data as XML and write it to file immediately. This, of course, would not scale on the back-end. Furthermore, the back-end has to worry about synchronization. Front-end applications can potentially get away with not making great use of idle processing time, but this is often not so on the back-end.
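
As a small illustration of the synchronization point, here's a sketch of shared server state guarded by a mutex. The names are hypothetical, and I'm using std::mutex for brevity (in 2009 terms, a Boost mutex would play the same role):

```cpp
// Sketch: shared back-end state must be guarded, because many user
// requests arrive simultaneously on different threads. Names are hypothetical.
#include <mutex>
#include <string>
#include <unordered_map>

class SessionStore {
public:
    void set(const std::string& user, const std::string& data) {
        std::lock_guard<std::mutex> lock(mutex_);  // serialize concurrent writers
        sessions_[user] = data;
    }

    std::string get(const std::string& user) {
        std::lock_guard<std::mutex> lock(mutex_);  // readers need the lock too
        auto it = sessions_.find(user);
        return it != sessions_.end() ? it->second : std::string();
    }

private:
    std::mutex mutex_;  // guards sessions_ across simultaneous requests
    std::unordered_map<std::string, std::string> sessions_;
};
```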

The front-end is a world where there is a lot more instant gratification. For example, you can design a new button, click a wizard option to add a handler, enable/disable the button at certain times, and handle the button press by initiating a file save. You can code this, test it, verify it, and have new functionality to show the same day. This is usually not the case for back-end development. On the back-end you might be designing functionality such as saving some data to the database. So, you write the code to handle the request message from the client, you write the stored procedure you need (possibly to be tweaked by a DBA later), and you code up the response to the message. After the same amount of time as was spent on the GUI, are you done? Unfortunately, no. Firstly, having gone this far, the functionality isn't actually verified, so it can potentially look like you haven't done any work. This will continue until there is front-end code to interact with the back-end. However, the integration process can get messy, so it's best to head off problems before they can occur. Therefore, the next step is to write an automated test that restores the database to a known state, simulates the message, verifies the expected response, and then verifies the database has the correct data. You may require n of these types of tests until you're at a point where you have confidence. From there, are you done? Alas, no. You still need to handle many users, so now it's time to test many users performing the same action. You would write an additional automated test with a high number of users repeatedly performing the same steps: restore the database to a known state at the beginning of the test, simulate the message, verify the expected response, and verify the correct data for that user… repeatedly, for the number of users. Failure can happen at any time, so you might have cases where 8 out of 10,000 attempts fail, and you need to look at this and know why. Until your test passes, you're not done. Then, until the front-end is making use of your code and integration has happened, nobody has seen it, which can take days, weeks, or longer. So, verification takes a very different route. This can be somewhat mitigated by having the same developer work on both the front-end and the back-end for a particular piece of functionality. There's some merit to that approach, so I generally applaud the attempt.
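
To make that verification loop concrete, here's a compilable sketch of both tests. Everything in it (restoreDatabase, sendMessage, queryRow) is a hypothetical stand-in, stubbed with an in-memory map so the example runs; a real harness would talk to an actual database and server:

```cpp
// Sketch of the back-end test loop: restore to a known state, simulate the
// message, verify the response, verify the stored data. All names hypothetical.
#include <cassert>
#include <map>
#include <string>

static std::map<std::string, std::string> db;  // stand-in for the real database

void restoreDatabase() { db.clear(); }         // restore to a known state

struct Response { bool ok; };

Response sendMessage(const std::string& user, const std::string& payload) {
    db[user] = payload;                        // stand-in for the real request path
    return Response{true};
}

std::string queryRow(const std::string& user) { return db[user]; }

int main() {
    // Single-user functional test: known state, simulate, verify.
    restoreDatabase();
    Response r = sendMessage("user1", "payload");
    assert(r.ok);
    assert(queryRow("user1") == "payload");

    // Load test: repeat for many users; even 8 failures in 10,000 matter.
    restoreDatabase();
    for (int i = 0; i < 10000; ++i) {
        std::string user = "user" + std::to_string(i);
        Response reply = sendMessage(user, "payload");
        assert(reply.ok);
        assert(queryRow(user) == "payload");
    }
    return 0;
}
```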

Of course, this is not to say that there is no testing on the GUI. There often is, but for many companies, it's about putting the work where there is the most value. Take the case where there are 8 failures out of 10,000. This sort of scenario can happen on the GUI just as easily. The difference is that it may only occur when a particular user uses the software in a particular way. So, perhaps the issue is only noticed by 1 out of 1,000 users, because the other 999 don't click as fast, or don't click as repeatedly. As sad as it is to think about, being a software purist, these are the sorts of issues that are unlikely to get fixed anyway. The first type of testing that sometimes happens for GUI code is unit tests, when proper use of the Model-View-Controller design pattern has occurred and the model and controller have been kept relatively framework-neutral. This can be difficult to achieve with some frameworks, so I don't see unit testing happening on the GUI as much as it should. The second type of testing that often occurs on the GUI is using an automated framework that actually simulates a user clicking various buttons and runs through these scenarios. I see a lot of value in this, but the software is often very pricey. Most companies I've worked at have shied away from this option because of the high cost involved.
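
Here's a small sketch of what "framework-neutral" buys you: a model class (the names are invented for illustration) with no GUI dependencies at all, so the enable/disable logic behind a Save button can be unit-tested without a single widget:

```cpp
// Sketch: the model knows nothing about buttons or widgets, so it can be
// tested headlessly. The view would merely query canSave() to enable the button.
#include <cassert>
#include <string>
#include <utility>

class SaveModel {
public:
    void setDocument(std::string text) { text_ = std::move(text); dirty_ = true; }
    void markSaved() { dirty_ = false; }
    bool canSave() const { return dirty_; }  // drives the Save button's state
private:
    std::string text_;
    bool dirty_ = false;
};

int main() {
    SaveModel model;
    assert(!model.canSave());    // nothing to save yet
    model.setDocument("hello");
    assert(model.canSave());     // an edit enables the Save button
    model.markSaved();
    assert(!model.canSave());    // saved; button disabled again
    return 0;
}
```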

So, anyway, hopefully this was an interesting discussion. This certainly isn't the last discussion I will have about these differences, and in future articles, I will talk about more ways to streamline the process.

· · · · · · · · · · · · · · · ·

So, one thing you might be wondering is why I titled my blog, The Software Purist. One friend even surmised that it had to do with some cleansing program where I would annihilate all the impure programmers. While a humorous suggestion, it wasn't what I was aiming for. 😉 The long and short of it is that at a previous job, two different managers used to refer to me as a C++ purist, for my approach to tackling issues when programming in the C++ language. It was generally meant as a compliment, because I believe they respected me "sticking to my guns", so to speak. My general mentality at the time was that all problems can be solved by using well-defined methods and best practices, while always maintaining a preference for anything standard or likely to become standard in the future (such as some of the Boost libraries). So, whenever a problem came up, my general methodology was to ask, "How would Scott Meyers solve this?" It's difficult to get more pure than following Scott Meyers's advice in Effective C++, at least at the time.

Now that we've been through that intro, perhaps some definitions, from my perspective, would be useful. There are two camps of extremes in software development. First, there are the purists. Purism is about following language standards, following best practices, avoiding legacy features, ignoring language loopholes, and using the language as intended by its authors. For example, a purist in C++ might completely avoid and ignore macros, except where necessary, such as for header guards. A C++ purist might also prefer to use std::copy instead of memcpy, even if either solution would work and performance were equal, because memcpy is outdated. A C++ purist would use std::string instead of MFC's CString or Qt's QString. I know, because I did this, and I generally stick to this advice unless it gets in the way.

Pragmatism is exactly the opposite. Pragmatism dictates that the most important thing is getting the job done. If you're a pragmatist, your goal is generally to get the product out the door. So, if macros streamline your process and make it quicker to code something, that matters more than following a set of recommendations, because you ship faster. Likewise, memcpy is a few characters less typing than std::copy and you're more used to it, so you use it, even though iterators are considered "safer" than pointers. Finally, you might use CString, because it gives you direct access to the internal memory, so you don't have to wrestle through an abstraction, and you can use a C-style API if you choose.
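
To put the two camps side by side, here's the same byte copy written both ways; this is my own minimal illustration, not anything from a particular codebase:

```cpp
// The same copy, purist-style and pragmatist-style. For plain bytes both
// work equally well; std::copy also handles non-trivial types and iterators.
#include <algorithm>
#include <cstring>
#include <vector>

int main() {
    std::vector<char> src(1024, 'x');
    std::vector<char> dst(src.size());

    // Purist: standard algorithm, type-safe, works with any iterator pair.
    std::copy(src.begin(), src.end(), dst.begin());

    // Pragmatist: familiar and terse, but raw pointers and no type checking.
    std::memcpy(dst.data(), src.data(), src.size());

    return 0;
}
```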

Now, these are all fair and valid views. Yes, that's right: both views are fair, and both are valid. Both are also extreme. We know from many aspects of life that extremes are generally bad. A temperature that is too hot or too cold is uncomfortable. Driving too fast or too slow is uncomfortable. And so on… The same holds true for software. Ultimately, we all evolve as developers. I have a theory that many developers start out as purists and migrate toward pragmatism each year, as they become more seasoned and gain business experience. Anyway, as most developers know, it can be a problem to be either too pure or too pragmatic.

If you're too pure, you will probably think every pragmatist's code is terrible, even to the point of saying that they're "under-engineering". I know, because I was there, a long time ago. In some situations, what's called for is simply getting the job done. Businesses need to ship a product, and a product that doesn't ship is a failure, even if the code was beautiful. Purism often has a large focus on long-term goals, which is definitely beneficial. The secret that purists don't want to let out is that by following purist methodology: 1) the coding becomes very mechanical and repetitive, which is great for developing, because if you can reuse patterns of development, it gets easier and more natural each time; and 2) the purist has a keen sensitivity to maintaining the code later, and knows that taking a shortcut now means grunting and groaning when looking at it again. The truth is that these are valid points, and in a lot of situations, this is the right approach. Of course, some things have to be compromised in the name of getting things done. On the flip side…

If you're too pragmatic, you will probably think every purist is overengineering. Why build a packet class for every packet to the server? You can more easily code it inline, in one large source file, right? The problem with this approach comes when you need to maintain it later, such as putting it in a test harness: all of that hardcoded logic becomes a mess to fit in. It's difficult to extract the 80 out of 200 lines of a large function that you actually want to test, while stubbing out the other 120, and this necessitates refactoring. Both extremes find that refactoring is time consuming. Extreme purists find that, in reality, it's difficult to find time to refactor, so they try to code in such a way as to avoid this step. Extreme pragmatists find that it's difficult to ever find time to refactor, so they just never bother with it and the code stays messy forever. Refactoring is one of those concepts that is good in principle, but unless you get everyone to buy in, it doesn't happen. Extreme pragmatists often don't buy into refactoring; they got used to the mess and have developed a mental map of the shortcuts, so they can often work effectively despite the challenges. But extreme pragmatism creates a potentially difficult work environment for coworkers, because the code becomes a minefield for the uninformed to trip over.
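
For what the packet-class approach looks like, here's a tiny sketch (the packet format and names are invented for illustration). Because the parsing lives in its own class, it drops straight into a test harness with no server running:

```cpp
// Sketch: parsing isolated in a small packet class instead of buried inline
// in a 200-line handler, so it can be unit-tested directly.
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

class LoginPacket {
public:
    // Parses [1-byte version][remaining bytes: user name]; rejects junk.
    bool parse(const std::vector<uint8_t>& bytes) {
        if (bytes.size() < 2) return false;
        version_ = bytes[0];
        user_.assign(bytes.begin() + 1, bytes.end());
        return true;
    }
    uint8_t version() const { return version_; }
    const std::string& user() const { return user_; }
private:
    uint8_t version_ = 0;
    std::string user_;
};

int main() {
    LoginPacket p;
    assert(!p.parse({}));                 // too short: rejected
    assert(p.parse({1, 'b', 'o', 'b'}));  // well-formed packet
    assert(p.version() == 1 && p.user() == "bob");
    return 0;
}
```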

Ultimately, as we know, any sort of extremist view should generally be avoided. There is almost never a single answer to any problem. Development has to be done with care, and the beauty of the code is important. However, don't lose sight of actually shipping the product. There must be a balance. If you feel like you are in the middle and you are accused of either overengineering or underengineering, it's very possible that the person you're talking to is an extremist. As for The Software Purist, my general approach now is to stay somewhere in between, but I still lean a bit towards the purist side, because ultimately, I have a high appreciation for the beauty of code.
