Saturday, August 12, 2017

Stress Free Programming

Code is a manifestation of understanding.

A programmer gradually builds up programming instructions based solely on the knowledge they have acquired. If they don’t know or understand something, it is unlikely that the runtime behavior of their code will meet expectations.

In reading code, we usually get a sense of the clarity and depth of the original author. If they were confused, the code is confused. If they saw the problem clearly, the code lays it out in neat, organized blocks. In that sense, it isn’t all that different from writing: an author leaves personality traces in their text as well. The difference is that in writing those traces make the work interesting, while in code they add accidental complexity.

Reading good code can be similar to reading a novel, in that, during the process the reader isn’t aware of the syntax and grammar, but rather just visualizing the story as the author has laid it out. This requires that the writing of the text isn’t awkward, so the reader flows smoothly through the work. The same is essentially true of well-written, readable code. The reader gets the narrative from the code, without necessarily getting bogged down in the syntax or underlying calls. That narrative explains how the data flows or the components interact. That is its story.

If code is quick and easy to read and confers a strong understanding of its purpose, it is elegant. This obviously requires a significant editing stage, just like editing a novel. The code is reworked, over and over again, until it flows.

Some types of changes intrinsically degrade the quality of the code. Often when programmers locally optimize, their work ignores the greater context, so it is really just causing obfuscation. They’ll add in parts that don’t fit or flow with the surrounding code. This degrades readability, but it also clouds the overall understanding.

When a programmer is rushed or under tremendous stress, their thought process is impaired. They don’t have time to think, they just need to act. They skip over the details and relationships, just banging out the most obvious localized manipulations possible. It is as if someone was just adding random tidbits to an existing novel that barely relate to the original story.

Programmers need to get into the zone. They need this so that they can visualize the largest portions of the code possible, so that their changes fit into the overall context. Big programs are never one- or two-person shows; they are too large. They need a team to get built, and they need a team to keep them from rusting.

In the zone, programmers can hyperfocus on visualizing how the data and code relate, which will allow them to organize it in their heads first before committing to making changes. They need to have the time to edit and re-edit as necessary until they have added as much readability back in the code as possible. They need to remember and cross-check a huge amount of moving parts, and to ponder the accumulating corner-cases.

If you want to study and learn a difficult subject, a library or quiet room is the best place; it minimizes the distractions. Even though most programming is routine, the collective weight of any large system makes fitting in new code a difficult subject. Disruptions destroy the context; flipping back and forth to something else, like Googling, just slows down the work and likely causes programmers to skip over important details.

Code needs time, patience and focus; that is the only way to enhance its quality and to ensure that it runs as expected. Bad quality doesn’t show up right away but gradually percolates until the code is unsustainable.

If a programmer is rushing through their work, they are doing this by deliberately ignoring a lot of necessary details. They can call a library to do something for them, but they may not understand where it is used correctly or what is happening underneath. They may be relying on really bad assumptions or fundamental misconceptions. Code that is based on that type of rapid construction is fundamentally weak, and even if it looks reasonable it will generally fail badly at some point. It can be pounded into functioning for sunny days, but it won’t survive any storms.

The programmers may also just skip the higher level organization hoping that it will magically sort itself out along the way. That is always a fatal tactic, in that problems only ever grow and eventually grow so large that they cannot be fixed in a reasonable time frame. Building a large system on top of a bad foundation is an easy way to fail.

The history of software development is all about having to write crazier and crazier code that just dances between these unnecessary artificial messes that we have allowed to build up through our bad practices. Most common programming problems are unnecessary and a direct byproduct of rushing through the work. This slows us down and increases our stress.

Stress eats up intellectual resources and it degrades the effort put into producing good code. If there is enough stress, the results are often code that is so hopelessly littered with interconnected bugs that no amount of reasonable effort will ever correct it. We live with a lot of awful code, we have just grown used to it.

The best code comes from studying a problem, doing some research and then visualizing the results into a clear articulation of the data and logic. It requires many passes of editing and an attention to detail where we dot the i’s and cross the t’s. If the time and patience is put into crafting this level of work, it will remain viable for a very long time. If not, it rusts quickly.

Good systems slowly build up large collections of reusable components that make the development process faster and smoother as time progresses. Bad systems build up artificial complexity that increasingly prohibits further effort as time goes on. They get worse as they get larger. As the problems grow out of control, so does the stress, which feeds back into more problems; many projects fall into this vicious downward spiral until they finally implode.

There is always some code that is throw-away. We might just need something to fill in, for now, it can be quick and dirty. But most code ends up filling in the problem and gets built upon, layer after layer. Even if it started as throwaway code, it never gets tossed.

If it is really foundational code, it makes no sense to just whack out crap in a high-stress environment. That is counter-productive since everything built on top needs to be arbitrarily more complex than necessary.

Even if the upper layers do correctly cover all of the artificial corner-cases, they do so at a disadvantage, replicating the mess they were intended to cover. That often just delays the inevitable.

To get stress-free programming, coders need a quiet environment. They need it to be neat and organized and they need any and all distractions removed. They can’t have loud music or texts popping up on their screens. They need to remain in the zone for as long as possible and they need a place where they can easily get there.

They also need to partition out the analysis, design and coding work from each other. Intermixing them is inefficient and disruptive. If you have to stop coding to search for missing data, or to investigate the data model, or to make architectural decisions, those interrupts become reflected in the results. If the workflow is messy, the code is too.

Like any good craftsman, a programmer should assemble the tools and supplies before the coding work begins, so that the flow is as continuous as possible. The first draft will hold together far better if it is a single connected effort.

But like writing, there are also many editing passes necessary before the rough work becomes smooth and pleasant. That is a slow process of going over the code again and again, refactoring the parts that are awkward to read. It is a different skill than the initial programming and takes a huge amount of time to master. It is, however, the very essence of quality. Few humans can write out a novel in one pass; the same is true of code. Mostly we try for one pass and then ignore the ugliness of the work.

Time is the one declining resource that programmers need the most. We live in a rushed society, running around like chickens with our heads cut off because we are constantly told this is the path to success. It isn’t. It’s the path to fruitless mediocrity, and somehow this 21st-century madness has embedded itself into everything, including software development. People routinely want a year's worth of work done in a few weeks, so it is no wonder that the results are so craptastic.

Programmers quite obviously need more time, and they need a reasonable environment where the expectations are back under control. By now we should be analyzing the various causes of stress in software development and finding ways to ensure that they don’t degrade the quality of the effort. Instead, we went off for at least a decade in the completely opposite direction and proceeded to inject far more stress into the workflow through a misguided attempt at gamification. Slowly the reality of those bad choices has made its way back into the mainstream; we are hopefully on some saner path of development these days.

Programming will never really be stress-free; there are always too many unknowns, since the full amount of required knowledge is vast. But if we must rely more and more on code, then it should be code that we have at least tried to make correct. To increase our ability to do that, we need to think more, and to do that we need less stress in our environments. It sounds great to just hack at some code with reckless abandon, but would you ever really trust that code with the important aspects of your life? “It kinda works” is a very low bar, and often far more destructive than useful.

Saturday, June 3, 2017

Stores of Value

Lately, I’ve been asked several times about what I think is the underlying value of cryptocurrencies. With their increased popularity, it is a rather pressing question.

Although I am not an economist, or a financial engineer, or even particularly knowledgeable, I have managed to glom onto little bits and pieces of knowledge about money that are floating about. Although these ideas all appear to fit together, I reserve the right to change my mind as time progresses and I learn more.

In a world without money -- if all other things were equal -- getting paid would be a bit problematic.

I might, for instance, go to work and in exchange for my efforts receive a chair as payment. I could roll that chair over to the local supermarket and exchange it for some groceries, and possibly a grocery cart as change. Once I returned to my house, I would have to take all of the tangible items I had received during the day and store them in various rooms, hopefully, to be available for rainy days. My retirement savings would require a whole warehouse.

Of course, that would be a massive pain and completely impractical. The size of the house I would need would be enormous and the bartered goods would get gradually covered with dust, or decay, or entropy would find some other sneaky way of breaking them down.

Quite obviously, someone would come along and provide a service allowing me to drop off my items, in exchange for keeping them clean and safe. However, they would have considerable incentives not to waste storage space either. They might then trade my newly acquired chair back to the same company that I work for, and that could actually form my payment for my next day's effort. In fact, I might end up getting the same chair, over and over, stacking it up in my inventory count until I’d have to withdraw, say, a sofa to get some plumbing fixed in the kitchen.

Clearly, that would be a rather insane society, so it is little surprise that we stepped up from bartering to having a means of keeping an abstract count. We don’t need to pass around physical items, just a “metric” that relates them all to each other. In that sense, money is a store of work, of the effort that I have put in, but it is not quite that simple.

The banks act much like that barter storage service, but they are regulated so that they can only multiply money at some fixed percentage. That is, they can loan out the same chair, again and again, but they are ultimately restricted in how many times they can do that, because they always need to keep a minimum number of chairs in storage. Given their multiplicative effect (and others), money is more realistically a small percentage of the work, not 100%. For argument's sake, let's just say that it is 20%.
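That multiplicative effect is the standard fractional-reserve "money multiplier". As a rough sketch (the 20% reserve figure is purely illustrative, as above), the total money created from an initial deposit is a geometric series:

```python
# Toy illustration of fractional-reserve money multiplication.
# With a 20% reserve requirement, each deposit can be re-loaned,
# but the bank keeps 20% in storage at every step, so the total
# money created is bounded.

def total_money(initial_deposit, reserve_ratio, rounds=1000):
    """Sum the geometric series of successive loans."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1.0 - reserve_ratio)  # the loanable portion
    return total

# The closed form of the same series is initial / reserve_ratio.
print(total_money(100, 0.20))  # approaches 500.0
print(100 / 0.20)              # 500.0
```

So a 20% reserve caps the multiplication at 5x the original deposit; lowering the reserve ratio raises that ceiling.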

Now, we use money internally in a country to provide daily interactions, but we also use it externally between countries. In order to minimize any internal disruptions, the sovereign central banks are allowed to play with the underlying percentages. So they might reduce that 20% underlying value down to say 15% because they want their country's stored values to be more competitive on the world markets.

It is of course considerably more complicated than that, but this perspective helps to illustrate that there is a ‘relative’ internal value to money, as well as a more ‘absolute’ external one and that much of what governments are doing is manipulating the underlying value to help the internal economies remain stable.

There is often talk of the world’s economies moving away from the gold standard as being dangerous or reckless, but a close friend of mine pointed out that it actually stabilized the system. Although money is a partial store of work, we like to peg it to something tangible to make it easy to compare different values. Gold emerged as this anchor, but it actually has a significant problem: the world’s supply of gold is not constant. Some of it might get used up for jewelry or industrial efforts, and there is always the potential for new, large mines to be discovered and flood the market. By pegging against a moving target, we introduce unwanted behaviors.

Since money is more or less a store of work, and we have issues with needing to interchange between nations, it seems most reasonable to just assume that the underlying value is essentially a country's GDP. That is, money is really backed by all of the products and services produced by a country. Watered down to be sure, but still anchored by something tangible. Except for wars and other catastrophic economic events, the amount of work produced changes slowly for most countries. The economies grow and shrink, but they don’t usually get spikes.

With that oversimplified introduction in place, we can almost talk about cryptocurrencies, but I still have one more tangent that is necessary first.

When electricity was first discovered, researchers found that they could quite easily create it in their science labs. This was long before our modern electrical appliances, in what was essentially a steampunk era. From their perspective, electricity was most likely just a curiosity. Essentially worthless. They would create it in small amounts and play with it. It was just some sort of parlor trick.

A modern electrical plant can generate a huge amount of electricity, sending it over the wires to millions of potential consumers. It has a significant value. As we created more devices, like light bulbs and radios, that needed power, electricity changed from being worthless to having a very strong underlying value. But without electrical devices to consume it, there is no value.

We couldn’t live without electricity now, and there are, no doubt, plenty of people that made their fortunes by supplying it to us. It’s a strong utility that drives our economies.

This shows quite clearly that value can be driven by demand alone; that something with no value can, in fact, become quite valuable if there are consumers. Keeping electricity in mind while discussing the current worth of cryptocurrencies is useful.

So with all of that in mind, we can finally get to cryptocurrencies.

It is somewhat obvious from the electricity analogy that if you produce something, there needs to be demand for it to gain value. For Bitcoin, the demand in the first few years seemed to be driven by curiosity or even entertainment. There were no real “electrical appliances”, just interested people playing with it. So there was some value there, but it was tiny.

As different segments of the various markets came in and found uses for Bitcoin, the underlying value slowly increased.

The addition of other competitive cryptocurrencies like Ethereum appears to have strengthened Bitcoin’s value, rather than hurt it.

A good indication that there is something tangible below is that both Bitcoin and Ethereum have weathered rather nasty storms: Mt. Gox for one and The DAO for the other. If they were just 100% pure speculation, people would have walked away from them; instead, although they took significant hits, they recovered. Fads and Ponzi schemes don’t weather storms; they just get forgotten.

As the cryptocurrencies break into new markets and there are more and more options for spending them, their underlying value increases. What drives them underneath is how flexible they are to use. If there are lots of places to use these currencies, and it is easy to switch back and forth, then as the roadblocks come down, the underlying value increases. So there is some intrinsic underlying value, and it is increasing, probably quite rapidly at this point.

A hot topic these days is whether or not countries are for or against cryptocurrencies. Popular sentiment ascribes the motivations behind central banks as a means to control the masses, but I’ve never really seen it in that draconian light. Countries control their currency in order to try and stabilize their economies and to control their debts to other nations. They’re probably not too worried about individual spending habits. Most countries want their citizens to be successful.

They do need, however, to collect their ‘cut’ of commerce as it flows; that is how they exist. Cryptocurrencies don’t have to disrupt these mechanics at all. That is, a country can still strengthen or weaken its money supply, and the fiat currency is still the final arbiter of worth within the borders and between nations.

Any deal with the Canadian government, for example, would be denominated in Canadian dollars. That won’t change even if all of the citizens used Bitcoin for their daily transactions. The dollar would still be the underlying measure.

Cryptocurrencies, being global, offer citizens a way to essentially hedge their funds externally as well. That option was always there: you can easily hold a US dollar bank account in Canada, for example, and then use it when you travel abroad. Doing so eliminates any FX risk from travel expenses, and if you restock the account during periods of better exchange rates, it boosts the spending power of the money by a small amount.

If it works for a Canadian with US dollars, then there is essentially no difference in doing it with Bitcoin when we are talking about external transactions.

The funny part, though, is what happens if all of the citizens pay for their daily goods with Bitcoin. When that happens with internal transactions, does it somehow diminish the government's ability to heat or cool down the economy?

There would obviously be some effect. If Bitcoin were the better deal, people would switch to using it; if it crashed, they would all return to Canadian dollars. That really feels like an asymmetric hedging effect. The underlying transactions would shift towards the best currency. But it is worth noting that all of the taxes would still have to be paid in the fiat currency, and both people and vendors would likely try to avoid currency risk and bad exchange rates when collecting these amounts. Canadian dollars would still be preferred.

Thus a fiat currency anchors the underlying value, and it wouldn’t surprise me in the least if, as cryptocurrencies became more popular, they had somewhat different valuations in each region. That is, a country's own financial constraints will come to bear on the cryptocurrencies as well, and the central banks will lose only a small amount of control over the value. People would then try to arbitrage the regional differences, but the summation of these effects would likely tie all of the fiat currencies closer together. They would collectively become more stable. More intertwined.

Realistically this has been happening for at least thirty years now. Globalization has been making it easier for currencies, and people, to flow. That relaxation of restrictions on the movement of cash naturally draws all of the players closer together. That, of course, means that one country's fate is more dependent on the others, but it is also true that bigger organizations are intrinsically more stable. They move slower, but that is sometimes an advantage in a volatile world.

The cryptocurrencies are just going to increase that trend, to allow it to become more fine-grained and to get ingrained into the masses. Thus, they likely are in line with the financial trends in the 21st century, maybe speeding up the effect somewhat but definitely not changing the dynamics.

All of this gets me back to the real underlying question, which is what cryptocurrencies are ultimately worth today. If we compare them to fiat currencies, we can start by looking at all of the work that goes into producing them: the hardware, network connections, operations, and electricity. That output is analogous to the GDP of a nation and it provides a definitive underlying value, but we also need to consider the analogy to electricity.

The base work of maintaining the Bitcoin infrastructure is contingent on there being demand. That is, as the end-points -- the various uses of the cryptocurrencies -- increase they supply underlying value to the effort of keeping the coins running. The two grow hand in hand. As more electrical appliances found their way into common usage, the value of electricity increased.

But it is also worth noting that, particularly for Bitcoin, the underlying market is in an expansion phase. It is rapidly growing to meet brand new needs; it is nowhere near saturation. This seemingly unlimited growth, fed by more and new types of demand, is in itself a temporary value. Being ‘expandable’ is a strong quality because it provides optimism for the future, which, while intangible, is still a driving force.

On the other hand, there are intrinsic, fixed technical limitations to the cryptocurrencies, particularly those bound to the high costs of proof-of-work (PoW).

No technology is infinitely scalable; the physical universe always imposes limits, and these technical limitations, rather than the full potential of the market itself, are where the saturation points arise. Thus the market will grow only as large and as strong as the technology allows, not to the full potential of the markets for the entire planet. That makes the issues around transaction speeds and backlogs far more pressing, since they seem to imply that the saturation point is coming far sooner than expected.
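Bitcoin's transaction-rate ceiling illustrates how low such a limit can be. A back-of-the-envelope calculation, using the classic figures of a 1 MB block roughly every ten minutes and an assumed average transaction size of about 250 bytes:

```python
# Back-of-the-envelope Bitcoin throughput ceiling.
# Assumptions (illustrative, pre-SegWit era): 1 MB blocks,
# one block every ~600 seconds, ~250 bytes per transaction.

block_size_bytes = 1_000_000
block_interval_seconds = 600
avg_tx_bytes = 250

tx_per_block = block_size_bytes / avg_tx_bytes          # 4000 transactions
tx_per_second = tx_per_block / block_interval_seconds   # roughly 7 per second

print(tx_per_second)
```

A global payment network saturates at a handful of transactions per second under those parameters, which is why backlogs appear long before demand runs out.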

All told, we seem to have at least four distinct parts to the valuation of cryptocurrencies: there is obviously some element of speculation, there is an underlying value to maintaining the infrastructure, there is the ability to expand, and finally there is increasing demand. We could likely model these as essentially separate curves over time, based on the historical data, and use that to attribute their various effects on the prices. It would be trickier in that some of these effects span multiple cryptocurrencies, and the underlying base is somewhat volatile right now, as different actors switch between the competing technologies.
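The four-part decomposition above could be sketched, very loosely, as a sum of independent curves over time. The functional forms and constants below are entirely made up for illustration; a real model would fit them to historical data:

```python
import math

# Hypothetical decomposition of a coin's value into four components.
# None of these curve shapes or coefficients come from real data;
# they only illustrate attributing a price to separate effects.

def speculation(t):      # volatile stand-in for speculative swings
    return 10 * math.sin(t)

def infrastructure(t):   # slow, steady cost of running the network
    return 5 + 0.5 * t

def expansion(t):        # optimism about growth, fading as markets saturate
    return 20 * math.exp(-0.1 * t)

def demand(t):           # adoption following a logistic (S-shaped) curve
    return 50 / (1 + math.exp(-0.5 * (t - 10)))

def modeled_value(t):
    return speculation(t) + infrastructure(t) + expansion(t) + demand(t)

for t in range(0, 21, 5):
    print(t, round(modeled_value(t), 2))
```

Fitting each curve separately, and then comparing their relative weights, is one way to estimate how much of the current price is speculation versus the more durable components.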

Still, this says that the valuation certainly isn’t 100% speculative right now and that the coins should be moving towards some specific value. The rate of return is driven way up by the possibilities of expansion, but it will eventually converge on what is essentially the collective change in GDP for the planet.

Is that higher or lower than today’s current price? That would take a huge amount of analysis, and likely a large number of lucky guesses, but it does seem that much like electricity, cryptocurrencies will eventually become a normal utility for our societies and that collectively they are undervalued right now.

Any individual coin though might be overvalued, specifically because of its technical limitations, but those are also subject to change as we increase our knowledge about how to build and deploy these types of systems.

So it appears very unlikely to me that the cryptocurrencies will just blow up one day and hit a valuation of 0. They’re not a Ponzi scheme, but rather a fundamental change to our financial infrastructure. If a total crash were going to happen, there have already been plenty of chances for it to occur. There is real value underneath, and it is really growing.

But caution is still required for any given currency, and it seems that spreading the risk across multiple ones is not only wiser but also in line with the observation that they seem to be stronger together than individually. The situation, however, is rapidly changing, and it will likely take at least a decade or two before it settles down.

Cryptocurrencies are here to stay, but there is still a long road to travel before they settle into their utility state. And it will no doubt be quite a rocky road.

Tuesday, March 14, 2017


The primary goal of software is to be useful. In order to do that, someone with a problem needs to use it to build up a ‘context’ and then execute some functionality. This context can be seen as all of the inputs necessary for their computation, whether it is persistent data, current configuration or the results of navigating from a broader context down to a specific one.

As an example of that latter input, for a GUI application, a user might start the program and select a number of menu items and buttons that gradually get them closer and closer to being able to run the desired code. They might open a file, move the cursor to a specific line, then enter some text. When satisfied, they will finally save the document, as a file, for long-term persistence on a disk or flash drive somewhere, achieving their goal.

The functionality that they require is really to modify an existing persistent file of a given format. Everything else they have to do (starting the app, picking a file, changing it, etc.) is just building up a context of inputs so that the save happens in the way they expect it to. Consequently, if the editor dies before the file is saved, they have to go back and navigate through the context again. It’s not over until the final functionality is completed.
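Since that final save is the only moment where the built-up context becomes durable, editors commonly write to a temporary file first and only then swap it into place, so a crash mid-write never destroys the old document. A minimal sketch (the function and file names are hypothetical):

```python
import os
import tempfile

def save_document(path, contents):
    """Write contents to path via a temp file in the same directory,
    so a crash mid-write never leaves a half-written document."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(contents)
            f.flush()
            os.fsync(f.fileno())    # force the bytes to disk
        os.replace(tmp_path, path)  # atomic swap into place
    except Exception:
        os.unlink(tmp_path)         # clean up the partial temp file
        raise

save_document("notes.txt", "the user's edits, finally persistent\n")
```

The temp file must live in the same directory as the target so that `os.replace` is a rename rather than a cross-device copy.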

For a command line environment it is the same. Most often the user works their way into a directory, might change the file with something like sed or awk redirected into a temp file, and then copies the modified file on top of the old one. Their navigation through the various commands is nearly equivalent to utilizing a series of widgets in a GUI.

In that sense we can frame any user action with a computer as a long series of navigating down to specific contexts to fire off some code. Programmers often focus on what that code is, and how it works, but from a user’s perspective the ease or pain of navigating is what they really see.

When programs are small, the amount of functionality is small enough that it can be pushed up to the top level. It really doesn’t need any serious organization. But as development continues and more and more functionality becomes available, knowing that some functionality exists, and navigating to it, become larger and larger problems. In many active but ancient products, a great many of the features just aren’t used because they are nearly impossible to find when needed. They exist, but they are buried in unexpected places.

To counter this, the focus should change as the code grows: first it is on adding new functionality, then it switches over to finding ways to reorganize the interface so that all of the existing functionality remains usable. This of course involves heavy refactoring, which is hugely expensive since the underlying layers of code have become locked into position, so it is rarely done.

It is also extremely difficult, in that the designers need to re-imagine every aspect of the program from the top down, from the perspective of the user. And then they need to overlay an organizational philosophy on top of everything, so that it is clearly intuitive to most people where to navigate to achieve their goals. There isn’t, that I know of, some magical means of applying ‘normalization’ that makes this process mechanical, and whatever arrangement they come up with will eventually degrade as the functionality continues to grow. It’s a perpetual problem.

What seems to occur in practice is that programs just gradually decay until they are essentially stuck. The navigation becomes so bad, and so arbitrary, and the work to fix it so large, that almost no more forward progress is made. Generally this is followed by another program, from someone else with a different organizational perspective, getting written and taking away most of the users. Eventually that one stalls too, and the cycle repeats itself. In that way it may seem like we are progressing, but often the replacements simply ignore old but required functionality, which, when added back, hastens the inevitable outcome. These days it seems more like we are just going round and round in circles.

It is far easier to ignore the navigation and just focus on grinding out functionality, which is why it is common practice. When software was young, this approach made a lot of sense, in that there was a real need for new features. But most of the industry has matured now, and there is a plethora of functionality available for just about every imaginable computation. It’s no longer an issue of having to create the functionality, rather it has become one of trying to find it. This change does not seem to be reflected in our development cultures. Yet.