Work, Life And Side Projects | Smashing Magazine
There is no doubt about it, I am a hypocrite. Fortunately nobody has noticed… until now. Here’s the thing. On one hand I talk about the importance of having a good work/life balance, and yet on the other I prefer to hire people who do personal projects in their spare time.
Do you see the problem with this scenario? How can one person possibly juggle work, life and the odd side project? It would appear there just aren’t enough hours in the day. Being the arrogant and stubborn individual I am, when this hypocrisy was pointed out to me, my immediate reaction was to endeavour to justify my position. A less opinionated individual would probably have selected one or the other, but I propose these two supposedly contradictory viewpoints can sit harmoniously together.
Can you have your cake and eat it, by working on side projects, holding down a job and still having a life beyond your computer? Image by GuySie 1.
To understand how this is possible we must first establish why a work/life balance is important and what role side projects play. Let’s begin by asking ourselves why it is important to have a life beyond our computers, even when we love what we do.
Why We Should Have A Life Beyond The Web
Generally speaking, Web designers love their job. In many cases our job is also our hobby. We love nothing more than experimenting with new technology and techniques. When we aren’t working on websites we are tinkering with gadgets and spending far more time online than the average person. Although this single-mindedness is useful in our job, it is ultimately damaging both to our personal wellbeing and to our career.
In the early days of my career, when I was young, I used to happily work long hours and regularly pull all-nighters. It was fun and I enjoyed my job. However, this set a habit in my working life that continued far longer than was healthy. Eventually I became stressed and fell ill. In the end things became so bad that I was completely unproductive.
This high-intensity working also sets a baseline for the whole industry, where it becomes the norm to work at this accelerated speed. No longer are we working long hours because we want to, but rather because there is an expectation we should. This kind of work/life balance can only end one way: in burnout. This damages us personally, our clients and the industry as a whole. It is in our own interest, and that of our clients, to look after our health.
This means we cannot spend our lives sitting in front of a screen. It simply isn’t healthy. Instead we need to participate in activities beyond our desks, preferably activities that involve at least some exercise. A healthy diet wouldn’t hurt either. Getting away from the Web (and the Web community) offers other benefits too. It is an opportunity for us to interact with non-Web people. Whether you are helping a charity or joining a rock climbing club, the people you meet will provide a much more realistic view of how ‘normal’ people lead their lives.
This will inform our work. I often think that, as Web designers, we live in a bubble in which everybody is on Twitter all day and understands that typing a URL into Google isn’t the best way to reach a website. Not that this is all we will learn from others. We can also learn from other people’s jobs. For example, there is a lot we can learn from architects, psychologists, marketeers and countless other professions. We can learn from their processes, techniques, expertise and outlook. All of this can be applied to our own role.
As somebody who attends a church (with a reasonable cross-section of people) and used to run a youth group, I can testify that mixing with non-Web people will transform your view of what we do. Furthermore, the activities you undertake will shape how you work. Reading a non-Web book, visiting an art gallery, or even taking a walk in the countryside can all inform and inspire your Web work. There is no doubt that stepping away from the computer at the end of a working day will benefit you personally and professionally. Does this therefore mean you should shelve your side projects? Not at all; these are just as important.
Why We Should All Have Side Projects
I love to hire people who have side projects. Take for example Rob Borley 2 who works at Headscape 3. He runs a takeaway ordering site 4, has his own mobile app business 5 and has just launched an iPad app 6. These projects have been hugely beneficial to Headscape. Rob has become our mobile expert, has a good handle on what it takes to launch a successful Web app and puts his entrepreneurial enthusiasm into everything he does for us.
Rob’s side projects, such as iTakeout 8, have broadened his experience and made him an indispensable employee.
But side projects don’t just benefit your employer; they benefit your personal career. They provide you with a chance to experiment and learn new techniques that your day job may not allow. They also provide you with the opportunity to widen your skills into new areas and roles. Maybe in your day job you are a designer, but your side project might provide the perfect opportunity to learn some PHP. Finally, side projects allow you to work without constraints. This is something many of us crave, and being able to set our own agenda is freeing. However, it is also a challenge. We have to learn how to deliver when there is nobody sitting over our shoulder pushing us to launch.
All of this knowledge from personal projects has a transformative effect that will change your career. It will increase your chance of getting a job and show your employer how valuable you are. It may also convince your employer to create a job that better utilises your skills, as we did for Rob. Rob used to be a project manager, but when we saw his passion and knowledge for mobile we created a new role focusing on that. Of course, this leads us to the obvious question: how can we have time away from the computer if we should also be working on side projects?
Is Hustling The Answer?
If you listen to Gary Vaynerchuk 9 or read Jason Calacanis, you may be forgiven for thinking the answer is to ‘hustle’; to work harder. They proclaim we should cut out TV, dump the Xbox and focus single-mindedly on achieving our goals. There is certainly a grain of truth in this. We often fritter away huge amounts of time, largely unaware of where it is going. We need to be much more conscious about how we are spending our time and ensure we are making a choice about where it goes.
I don’t think working harder is the long term solution, however. We can work hard for short periods of time, but as we have already established this can’t continue indefinitely. We need downtime. We need time lounging in front of the TV or mindlessly shooting our friends in Halo. If we don’t have that we never allow our brain the chance to recuperate and we end up undermining our efficiency. I don’t believe the answer is “work hard, play hard”. I believe the answer is “work smarter”.
We Can Do Everything If We Work Smarter
Working smarter is about three things:
- Combining interests,
- Creating structure,
- Knowing yourself.
Let’s look at each in turn.
A good starting point when it comes to working smarter is to look for commonality between the three aspects of your life (work, life and side projects). You can often achieve a lot by coming up with things that have a positive impact in each of those areas. Take for example the choice of your personal project. If you look at most personal projects out there, they are aimed at a technical audience. We are encouraged to “build for people like us” which has led to an endless plethora of HTML frameworks and WordPress plugins.
Maybe if we got out more there would be a wider range of personal projects and fewer near-identical jQuery plugins 11!
If however we have built up interests outside of the Web, suddenly it opens up a new world of possibilities for side projects.
I wanted to get to know more people at my church. There are so many I have never spoken to. I also wanted to keep my hand in with code (as I don’t get to code a lot anymore), so I decided to build a new church website in my spare time. This involved talking to lots of people from the church, and also gave me the chance to experiment with new ways of coding. What is more, some of the things I learned have been valuable at work too.
Look for ways of combining personal projects with outside activities. Alternatively, identify side projects that could make your working life easier. This kind of crossover lets you get more done. However, by itself that is not enough. We need some structure too.
If we want to get the balance right between personal projects, work and life we need some structure to work in.
For a start, take control of your working hours. I know this isn’t easy if you have a slave driver of a boss, but most of us have at least some control over how long we work. You will be surprised: limiting your hours won’t damage your productivity as much as you think. You will probably get as much done in less time. Work tends to expand to take as much time as you are willing to give it. Next, stop fluttering from one thing to another. When you are “having a life” don’t check work email or answer calls. There is a growing expectation that we should be available 24/7. Resist it.
One method to keep you focused is the Pomodoro technique 12. This simple approach breaks your day into a series of 30-minute chunks. You work for 25 minutes on a single task, free from interruption, and then have a 5-minute break. Similar tasks are grouped together so that you spend 25 minutes answering email rather than allowing email to interrupt other blocks of work.
The Pomodoro technique 14 is a simple way of staying focused on the task in hand.
Set specific times for working on personal projects and stick to them. Don’t allow that time to expand into your free time. Equally, don’t allow work to distract you from your side project. Set boundaries. If you need to, set an alarm for each activity. Nothing will focus your mind on a personal project like having only 30 minutes until your alarm goes off. You will inevitably try to squeeze just one more thing in. These artificial deadlines can be very motivating.
Finally, make sure work, personal projects and recreation all have equal priority in your mind. One way to do this is to use a task manager like Omnifocus 15, Things 16 or Wunderlist 17 to keep all your tasks in one place. Often we have a task list for our work but not for other aspects of our life. This means that work is always prioritised over other activities. It is just as important to have a task to “finish that book” you are reading as “debug IE7”. Providing structure won’t just help with your side projects. It will also help with your sanity.
Remember, the goal here is to have fun on side projects, broaden your horizon with outside activities and recharge with downtime. You therefore must be vigilant in keeping the balance and ensure that all these competing priorities don’t drain you.
Part of the problem is that we spend too much time on activities that we are just not suited to. It’s important to recognize your weaknesses and avoid them. If you don’t, you waste time doing things you hate and doing them badly. For example, I am just no good at DIY. I used to waste hours trying to put up shelves and fix plumbing. Because I was trying to do something I was weak at, it would take forever and leave me too tired to do other things.
My solution to this problem was to delegate. I employed people to do my DIY. People that could do it much quicker and to a higher quality than me. How did I pay for this? I did what I was good at, building websites. I would work on the odd freelance site, which I could turn around quickly and enjoy doing. This applies to the side projects we take on too. Learning new skills is one thing, but if it stops being fun because you are just not suited to it, move on. Working on stuff you are not suited to will just leave you demoralized and tired.
Talking of being tired, I would recommend not working on personal projects immediately after getting home from work. Give yourself time to unwind and allow your brain to recover. Equally don’t work on side projects right up until you go to bed. This will play havoc with your sleep patterns and undermine your productivity.
Finally, remember that side projects are meant to be fun. Don’t undertake anything too large because not seeing regular results will undermine your enthusiasm. If you want to work on something large, I suggest working with others. There is certainly no shortage of opportunities 18. Alternatively try breaking up the project into smaller sub-projects each with a functioning deliverable.
Am I Asking For The Impossible?
So there you have it. My attempt to have my cake and eat it. I believe you can have side projects, a life beyond computers and get the day job done. It’s not always easy and if I had to pick I would choose having a life over side projects. However, I believe that personal projects can be fun, good for our careers and also facilitate a life beyond the Web.
So do you agree? Am I being unrealistic? What challenges do you find in striking the balance or what advice do you have for others? These are just my thoughts and I am sure you can add a lot to the discussion in the comments.
Cache them if you can | High Performance Web Sites
“The fastest HTTP request is the one not made.”
I always smile when I hear a web performance speaker say this. I forget who said it first, but I’ve heard it numerous times at conferences and meetups over the past few years. It’s true! Caching is critical for making web pages faster. I’ve written extensively about caching:
- Call to improve browser caching
- (lack of) Caching for iPhone Home Screen Apps
- Redirect caching deep dive
- Mobile cache file sizes
- Improving app cache
- Storager case study: Bing, Google
- App cache & localStorage survey
- HTTP Archive: max-age
Things are getting better – but not quickly enough. The chart below from the HTTP Archive shows that the percentage of resources that are cacheable has increased 10% during the past year (from 42% to 46%). Over that same time the number of requests per page has increased 12% and total transfer size has increased 24% (chart).
Perhaps it’s hard to make progress on caching because the problem doesn’t belong to a single group – responsibility spans website owners, third party content providers, and browser developers. One thing is certain – we have to do a better job when it comes to caching.
I’ve gathered some compelling statistics over the past few weeks that illuminate problems with caching and point to some next steps. Here are the highlights:
- 55% of resources don’t specify a max-age value
- 46% of the resources without any max-age remained unchanged over a 2 week period
- some of the most popular resources on the Web are only cacheable for an hour or two
- 40-60% of daily users to your site don’t have your resources in their cache
- 30% of users have a full cache
- for users with a full cache, the median time to fill their cache is 4 hours of active browsing
Read on to understand the full story.
My kingdom for a max-age header
Many of the caching articles I’ve written address issues such as size & space limitations, bugs with less common HTTP headers, and outdated purging logic. These are critical areas to focus on. But the basic function of caching hinges on websites specifying caching headers for their resources. This is typically done using max-age in the Cache-Control response header. This example specifies that a response can be read from cache for 1 year:
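One year is 31,536,000 seconds, so such a response header takes this form:

```http
Cache-Control: max-age=31536000
```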
Since you’re reading this blog post you probably already use max-age, but the following chart from the HTTP Archive shows that 55% of resources don’t specify a max-age value. This translates to 45 of the average website’s 81 resources needing an HTTP request even for repeat visits.
Missing max-age != dynamic
Why do 55% of resources have no caching information? Having looked at caching headers across thousands of websites my first guess is lack of awareness – many website owners simply don’t know about the benefits of caching. An alternative explanation might be that many resources are dynamic (JSON, ads, beacons, etc.) and shouldn’t be cached. Which is the bigger cause – lack of awareness or dynamic resources? Luckily we can quantify the dynamicness of these uncacheable resources using data from the HTTP Archive.
The HTTP Archive analyzes the world’s top ~50K web pages on the 1st and 15th of the month and records the HTTP headers for every resource. Using this history it’s possible to go back in time and quantify how many of today’s resources without any max-age value were identical in previous crawls. The data for the chart above (showing 55% of resources with no max-age) was gathered on Feb 15 2012. The chart below shows the percentage of those uncacheable resources that were identical in the previous crawl on Feb 1 2012. We can go back even further and see how many were identical in both the Feb 1 2012 and the Jan 15 2012 crawls. (The HTTP Archive doesn’t save response bodies so the determination of “identical” is based on the resource having the exact same URL, Last-Modified, ETag, and Content-Length.)
46% of the resources without any max-age remained unchanged over a 2 week period. This works out to 21 resources per page that could have been read from cache without any HTTP request but weren’t. Over a 1 month period 38% are unchanged – 17 resources per page.
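As a rough sketch of the “identical” test described above (the field names here are illustrative, not the HTTP Archive’s actual schema), two crawl records count as the same resource when the URL and the validator fields all match:

```python
def same_resource(a, b):
    # The HTTP Archive stores headers but not response bodies, so two
    # crawl records are treated as "identical" when the URL and the
    # validator fields (Last-Modified, ETag, Content-Length) all match.
    keys = ("url", "last_modified", "etag", "content_length")
    return all(a.get(k) == b.get(k) for k in keys)

feb15 = {"url": "http://example.com/logo.png",
         "last_modified": "Mon, 06 Feb 2012 10:00:00 GMT",
         "etag": '"abc123"',
         "content_length": 5120}
feb01 = dict(feb15)  # unchanged between the two crawls
assert same_resource(feb15, feb01)
```

Any resource passing this test across consecutive crawls could have been served from cache, yet without a max-age header it costs a request on every visit.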
This is a significant missed opportunity. Here are some popular websites and the number of resources that were unchanged for 1 month but did not specify max-age:
- http://www.toyota.jp/ – 172 resources without max-age & unchanged for 1 month
- http://www.sfgate.com/ – 133
- http://www.hasbro.com/ – 122
- http://www.rakuten.co.jp/ – 113
- http://www.ieee.org/ – 97
- http://www.elmundo.es/ – 80
- http://www.nih.gov/ – 76
- http://www.frys.com/ – 68
- http://www.foodnetwork.com/ – 66
- http://www.irs.gov/ – 58
- http://www.ca.gov/ – 53
- http://www.oracle.com/ – 52
- http://www.blackberry.com/ – 50
Recalling that “the fastest HTTP request is the one not made”, this is a lot of unnecessary HTTP traffic. I can’t prove it, but I strongly believe this is not intentional – it’s just a lack of awareness. The chart below reinforces this belief – it shows the percentage of resources (both cacheable and uncacheable) that remain unchanged starting from Feb 15 2012 and going back for one year.
The percentage of resources that are unchanged is nearly the same when looking at all resources as it is for only uncacheable resources: 44% vs. 46% going back 2 weeks and 35% vs. 38% going back 1 month. Given this similarity in “dynamicness” it’s likely that the absence of max-age has nothing to do with the resources themselves and is instead caused by website owners overlooking this best practice.
3rd party content
If a website owner doesn’t make their resources cacheable, they’re just hurting themselves (and their users). But if a 3rd party content provider doesn’t have good caching behavior it impacts all the websites that embed that content. This is both bad and good. It’s bad in that one uncacheable 3rd party resource can impact multiple sites. The good part is that shifting 3rd party content to adopt good caching practices also has a magnified effect.
So how are we doing when it comes to caching 3rd party content? Below is a list of the top 30 most-used resources according to the HTTP Archive. These are the resources that were used the most across the world’s top 50K web pages. The max-age value (in hours) is also shown.
- http://www.google-analytics.com/ga.js (2 hours)
- (8760 hours)
- http://pagead2.googlesyndication.com/pagead/js/r20120208/r20110914/show_ads_impl.js (336 hours)
- http://pagead2.googlesyndication.com/pagead/render_ads.js (336 hours)
- http://pagead2.googlesyndication.com/pagead/show_ads.js (1 hour)
- https://apis.google.com/_/apps-static/_/js/gapi/gcm_ppb,googleapis_client,plusone/[…] (720 hours)
- http://pagead2.googlesyndication.com/pagead/osd.js (24 hours)
- http://pagead2.googlesyndication.com/pagead/expansion_embed.js (24 hours)
- https://apis.google.com/js/plusone.js (1 hour)
- http://googleads.g.doubleclick.net/pagead/drt/s?safe=on (1 hour)
- (3825 hours)
- http://connect.facebook.net/rsrc.php/v1/yQ/r/f3KaqM7xIBg.swf (164 hours)
- https://ssl.gstatic.com/s2/oz/images/stars/po/Publisher/sprite2.png (8760 hours)
- https://apis.google.com/_/apps-static/_/js/gapi/googleapis_client,iframes_styles[…] (720 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yv/r/ZSM9MGjuEiO.js (8742 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yx/r/qP7Pvs6bhpP.js (8699 hours)
- https://plusone.google.com/_/apps-static/_/ss/plusone/[…] (720 hours)
- http://b.scorecardresearch.com/beacon.js (336 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yx/r/lP_Rtwh3P-S.css (8710 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yA/r/TSn6F7aukNQ.js (8760 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yk/r/Wm4bpxemaRU.js (8702 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yZ/r/TtnIy6IhDUq.js (8699 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yy/r/0wf7ewMoKC2.css (8699 hours)
- http://static.ak.fbcdn.net/rsrc.php/v1/yO/r/H0ip1JFN_jB.js (8760 hours)
- http://platform.twitter.com/widgets/hub.1329256447.html (87659 hours)
- (8699 hours)
- http://platform.twitter.com/widgets.js (1 hour)
- https://plusone.google.com/_/apps-static/_/js/plusone/[…] (720 hours)
- http://pagead2.googlesyndication.com/pagead/js/graphics.js (24 hours)
- http://s0.2mdn.net/879366/flashwrite_1_2.js (720 hours)
There are some interesting patterns.
- simple URLs have short cache times – Some resources have very short cache times, e.g., ga.js (1), show_ads.js (5), and twitter.com/widgets.js (27). Most of the URLs for these resources are very simple (no querystring or URL “fingerprints”) because these resource URLs are part of the snippet that website owners paste into their page. These “bootstrap” resources are given short cache times because there’s no way for the resource URL to be changed if there’s an emergency fix – instead the cached resource has to expire in order for the emergency update to be retrieved.
- long URLs have long cache times – Many 3rd party “bootstrap” scripts dynamically load other resources. These code-generated URLs are typically long and complicated because they contain some unique fingerprinting, e.g., http://pagead2.googlesyndication.com/pagead/js/r20120208/r20110914/show_ads_impl.js (3) and http://platform.twitter.com/widgets/hub.1329256447.html (25). If there’s an emergency change to one of these resources, the fingerprint in the bootstrap script can be modified so that a new URL is requested. Therefore, these fingerprinted resources can have long cache times because there’s no need to rev them in the case of an emergency fix.
- where’s Facebook’s like button? – Facebook’s like.php and likebox.php are also hugely popular but aren’t in this list because the URL contains a querystring that differs across every website. Those resources have an even more aggressive expiration policy compared to other bootstrap resources – they use no-cache, no-store, must-revalidate. Once the like[box] bootstrap resource is loaded, it loads the other required resources: lP_Rtwh3P-S.css (19), TSn6F7aukNQ.js (20), etc. Those resources have long URLs and long cache times because they’re generated by code, as explained in the previous bullet.
- short caching resources are often async – The fact that bootstrap scripts have short cache times is good for getting emergency updates, but is bad for performance because they generate many Conditional GET requests on repeat visits. We all know that scripts block pages from loading, so these Conditional GET requests can have a significant impact on the user experience. Luckily, some 3rd party content providers are aware of this and offer async snippets for loading these bootstrap scripts, mitigating the impact of their short cache times. This is true for ga.js (1), plusone.js (9), twitter.com/widgets.js (27), and Facebook’s like[box].php.
These extremely popular 3rd party snippets are in pretty good shape, but as we get out of the top widgets we quickly find that these good caching patterns degrade. In addition, more 3rd party providers need to support async snippets.
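The fingerprinting pattern used by these providers is easy to apply to your own resources. A minimal sketch (the helper name is hypothetical): hash the file contents into the URL, then serve the fingerprinted resource with a long max-age while the stable bootstrap URL keeps a short one.

```python
import hashlib

def fingerprinted_url(path, content):
    # Hash the bytes into the filename so the URL changes whenever the
    # content changes; each versioned URL can then be cached "forever".
    digest = hashlib.md5(content).hexdigest()[:10]
    name, _, ext = path.rpartition(".")
    return f"{name}.{digest}.{ext}"

css = b"body { color: #333; }"
versioned = fingerprinted_url("static/site.css", css)
# Serve `versioned` with Cache-Control: max-age=31536000. When the CSS
# changes, the digest (and thus the URL) changes, so no purge is needed -
# the HTML simply references the new URL.
```

Only the small bootstrap resource, whose URL must stay stable, needs a short cache time; everything it references can be immutable.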
Cache sizes are too small
In January 2007 Tenni Theurer and I ran an experiment at Yahoo! to estimate how many users had a primed cache. The methodology was to embed a transparent 1×1 image in the page with an expiration date in the past. If users had the expired image in their cache the browser would issue a Conditional GET request and receive a 304 response (primed cache). Otherwise they’d get a 200 response (empty cache). I was surprised to see that 40-60% of daily users to the site didn’t have the site’s resources in their cache and 20% of page views were done without the site’s resources in the cache.
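The server-side bookkeeping for that experiment can be sketched in a few lines (function and bucket names here are illustrative): because the beacon image is served with an Expires date in the past, a browser that has it cached must revalidate with a conditional GET, while an empty cache triggers a full request.

```python
def classify_request(headers):
    # The beacon image expires immediately, so a repeat visitor whose
    # browser cached it sends a conditional GET (If-Modified-Since or
    # If-None-Match). A plain GET means the image wasn't in cache.
    if "If-Modified-Since" in headers or "If-None-Match" in headers:
        return 304, "primed cache"   # reply 304 Not Modified; count a hit
    return 200, "empty cache"        # reply 200 with the image; count a miss

status, bucket = classify_request(
    {"If-Modified-Since": "Thu, 01 Jan 2007 00:00:00 GMT"})
```

Tallying the two buckets over a day of traffic yields the primed-vs-empty cache percentages quoted above.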
Numerous factors contribute to this high rate of unique users missing the site’s resources in their cache, but I believe the primary reason is small cache sizes. Browsers have increased the size of their caches since this experiment was run, but not enough. It’s hard to test browser cache size. Blaze.io’s article Understanding Mobile Cache Sizes shows results from their testing. Here are the max cache sizes I found for browsers on my MacBook Air. (Some browsers set the cache size based on available disk space, so let me mention that my drive is 250 GB and has 54 GB available.) I did some testing and searching to find max cache sizes for my mobile devices and IE.
- Chrome: 320 MB
- Internet Explorer 9: 250 MB
- Firefox 11: 830 MB (shown in about:cache)
- Opera 11: 20 MB (shown in Preferences | Advanced | History)
- iPhone 4, iOS 5.1: 30-35 MB (based on testing)
- Galaxy Nexus: 18 MB (based on testing)
I’m surprised that Firefox 11 has such a large cache size – that’s almost close to what I want. All the others are (way) too small. 18-35 MB on my mobile devices?! I have seven movies on my iPhone – I’d gladly trade Iron Man 2 (1.82 GB) for more cache space.
Caching in the real world
In order to justify increasing browser cache sizes we need some statistics on how many real users overflow their cache. This topic came up at last month’s Velocity Summit where we had representatives from Chrome, Internet Explorer, Firefox, Opera, and Silk. (Safari was invited but didn’t show up.) Will Chan from the Chrome team (working on SPDY) followed up with this post on Chromium cache metrics from Windows Chrome. These are the most informative real user cache statistics I’ve ever seen. I strongly encourage you to read his article.
Some of the takeaways include:
- ~30% of users have a full cache (capped at 320 MB)
- for users with a full cache, the median time to fill their cache is 4 hours of active browsing (20 hours of clock time)
- 7% of users clear their cache at least once per week
- 19% of users experience “fatal cache corruption” at least once per week thus clearing their cache
The last stat about cache corruption is interesting – I appreciate the honesty. The IE 9 team experienced something similar. In IE 7 & 8 the cache was capped at 50 MB, based on tests showing that increasing the cache size didn’t improve the cache hit rate. They revisited this surprising result in IE9 and found that larger cache sizes actually did improve the cache hit rate:
In IE9, we took a much closer look at our cache behaviors to better understand our surprising finding that larger caches were rarely improving our hit rate. We found a number of functional problems related to what IE treats as cacheable and how the cache cleanup algorithm works. After fixing these issues, we found larger cache sizes were again resulting in better hit rates, and as a result, we’ve changed our default cache size algorithm to provide a larger default cache.
Will mentions that Chrome’s 320 MB cap should be revisited. 30% seems like a low percentage for full caches, but could be accounted for by users who aren’t very active and active users who only visit a small number of websites (for example, just Gmail and Facebook). If possible I’d like to see these full cache statistics correlated with activity. It’s likely that the users who account for the biggest percentage of web visits are more likely to have a full cache, and thus experience slower page load times.
The data presented here suggest a few areas to focus on:
Website owners need to increase their use of Cache-Control max-age, and the max-age times need to be longer. 38% of resources were unchanged over a 1 month period, and yet only 11% of resources have a max-age value that high. Most resources, even if they change, can be refreshed by including a fingerprint in the URL specified in the HTML document. Only bootstrap scripts from 3rd parties should have short cache times (hours). Truly dynamic responses (JSON, etc.) should specify must-revalidate. A year from now rather than seeing 55% of resources without any max-age value we should see 55% cacheable for a month or more.
3rd party content providers need wider adoption of the caching and async behavior shown by the top Google, Twitter, and Facebook snippets.
Browser developers stand to bring the biggest improvements to caching. Increasing cache sizes is a likely win, especially for mobile devices. Data correlating cache sizes and user activity is needed. More intelligence around purging algorithms, such as IE 9’s prioritization based on mime type, will help when the cache fills up. More focus on personalization (what are the sites I visit most often?) would also create a faster user experience when users go to their favorite websites.
It’s great that the number of resources with caching headers grew 10% over the last year, but that just isn’t enough progress. We should really expect to double the number of resources that can be read from cache over the coming year. Just think about all those HTTP requests that can be avoided!
A Better Way To Program
This video will change the way you think about programming. The argument is clear and impressive - it suggests that we really are building programs with one hand tied behind our backs. After you have watched the video you will want the tools demonstrated.
We often focus on programming languages and think that we need a better language to program better. Bret Victor gave a talk that demonstrated that this is probably only a tiny part of the problem. The key is probably interactivity. Don’t wait for a compile to complete to see what effect your code has on things - if you can see it in real time then programming becomes much easier. Currently we are programming with one arm tied behind our backs because the tools that we use separate us from what we write and what happens.
Interactivity makes code understandable.
Moving on, the next idea is that instead of reading code and understanding it, seeing what the code does is understanding it. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code?
All of this is explained and demonstrated in this long (1 hour) video. It also has the problem that it starts very slowly and is occasionally self-indulgent. But, as they say, if you watch just one video this year, make it this one.
It eventually gets going, and it isn’t only about game programming: at about 18 minutes in you will find the same ideas applied to more abstract coding and even to other engineering disciplines.
There are some socio-political ideas explained along the way - feel free to disagree with them - but don’t ignore the important technical points being made.
The talk was given at CUSEC 2012 (the Canadian University Software Engineering Conference).
Bret Victor is clearly someone to keep an eye on. Have a look at his website for even more really interesting ideas.
Advice From An Old Programmer — Learn Python The Hard Way, 2nd Edition
You’ve finished this book and have decided to continue with programming. Maybe it will be a career for you, or maybe it will be a hobby. You’ll need some advice to make sure you continue on the right path, and get the most enjoyment out of your newly chosen activity.
I’ve been programming for a very long time. So long that it’s incredibly boring to me. At the time that I wrote this book, I knew about 20 programming languages and could learn new ones in about a day to a week, depending on how weird they were. Eventually, though, this just became boring and couldn’t hold my interest anymore. This doesn’t mean I think programming is boring, or that you will think it’s boring, only that I find it uninteresting at this point in my journey.
What I discovered after this journey of learning is that it’s not the languages that matter but what you do with them. Actually, I always knew that, but I’d get distracted by the languages and forget it periodically. Now I never forget it, and neither should you.
Which programming language you learn and use doesn’t matter. Do not get sucked into the religion surrounding programming languages as that will only blind you to their true purpose of being your tool for doing interesting things.
Programming as an intellectual activity is the only art form that allows you to create interactive art. You can create projects that other people can play with, and you can talk to them indirectly. No other art form is quite this interactive. Movies flow to the audience in one direction. Paintings do not move. Code goes both ways.
Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.
People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.
Of course, all of this advice is pointless. If you liked learning to write software with this book, you should try to use it to improve your life any way you can. Go out and explore this weird wonderful new intellectual pursuit that barely anyone in the last 50 years has been able to explore. Might as well enjoy it while you can.
Finally, I’ll say that learning to create software changes you and makes you different. Not better or worse, just different. You may find that people treat you harshly because you can create software, maybe using words like “nerd”. Maybe you’ll find that because you can dissect their logic that they hate arguing with you. You may even find that simply knowing how a computer works makes you annoying and weird to them.
To this I have just one piece of advice: they can go to hell. The world needs more weird people who know how things work and who love to figure it all out. When they treat you like this, just remember that this is your journey, not theirs. Being different is not a crime, and people who tell you it is are just jealous that you’ve picked up a skill they never in their wildest dreams could acquire.
You can code. They cannot. That is pretty damn cool.
The Greatness of Git
When Linus Torvalds says he is going to work on a side project he doesn’t think small and he doesn’t work slowly.
When he created “Git,” the software source control and collaboration system that runs Linux kernel development, he started writing code on a Sunday (April 3, 2005) and emerged just a few days later with a new revision control system that today is regarded as one of the best pieces of software ever written (second only, of course, to Linux itself).
As Andrew Morton said when introducing Linus to speak about Git to an audience at Google, Git is “expressly designed to make you feel less intelligent than you thought you were.”
Software Freedom Law Center Founder and co-author of the GPL Eben Moglen said during a keynote panel at LinuxCon last August: “Linus was presented with a nasty weekend once upon a time and out of it came Git. Another brilliant achievement, you understand. A work of superb design that is going to change the software industry and the world…because one man had one itch one weekend that was really biting, and he had to invent something. And he’s a brilliantly inventive man and scored another hole in one.”
Git had to be great in order to support the unmatched rate of development that Linux requires. Today, the Linux community applies more than five patches per hour to the kernel and to date has written more than 15 million lines of code. The sheer size of Linux development has made the project one from which others have borrowed both collaborative development lessons and tools - like Git. Today Git is used by the Linux community, as well as by developers working on projects that range from Ruby on Rails to Android to Perl and Eclipse, and many more.
The popularity of Git is also resulting in it becoming part of the technology vernacular, with businesses based on Git flourishing.
Consider GitHub. This is an amazing code repository that uses the Git revision control system and has become one of the most popular places to host and collaborate on software. This service is being used by more than a million people to store over two million code repositories.
Could Git also be getting into publishing? Maybe. Wired.com reporter Bob McMillan recently took GitHub for a spin, publishing his story about the repository in the repository.
“GitHub was originally designed for software developers…But nowadays, it’s also being used to oversee stuff outside the programming world, including DNA data and Senate bills that may turn into laws and all sorts of other stuff you can put into a text file, such as, well, a Wired article.”
He might have gotten a little more than he bargained for with all the collaboration, but his experiment demonstrates its power.
GitHire is another new online application and service that builds upon Git for finding the world’s best programmers. GitHire crawls Git repositories, finds and ranks programmers based on their code and reputation, and provides employers with a short list of the world’s best talent most relevant to their needs. If you’re a software developer and doubted it before, code is most definitely the new resume.
There are a number of other examples, as well as native Git for Windows, Git implementations in other languages, tutorial businesses based on Git, and more.
The measure of truly great software development is use. When others use it and build new projects and businesses from it, you know it’s truly great. This is the essence of Linux and open source software development. By writing the best code and sharing it with the world, everything gets better, faster, and ever more new ways to collaborate and share emerge.
Letting Hackers Compete, Facebook Eyes New Talent - Technology Review
As it readies for an IPO, the social network puts engineers, not HR, in charge of a global search for young programmers.
Late this January, some 75,000 people around the planet sat in front of their computers and pondered how to make anagrams from a bowl of alphabet soup. They were participants in the Hacker Cup, an international programming battle that Facebook organized to help it find the brightest young software engineers before competitors like Google do.
After three more rounds of brain teasers, Facebook will fly the top 25 coders to its head office in Menlo Park for an adrenaline-soaked finale this March that will award the champion $5,000. In return, Facebook gets a shot at hiring the stars discovered along the way.
"I’m in an all-out land grab for talent," says Jocelyn Goldfein, Facebook’s director of engineering and most senior woman on its technical staff. The social network builds almost all of its own software, and young, smart coders are the company’s most critical asset as it manages the comments, photos, and "likes" of more than 800 million users. "We are in uncharted waters every day," says Goldfein. "What’s great about young people is that they don’t know what’s impossible, so they try crazy things and lead us to be the first to make them work."
Google and many other companies are chasing the same code slingers as Facebook, causing salaries to shoot up. Average salaries for technology professionals in Silicon Valley rose 5.2 percent in 2011 to break the $100,000 barrier, while pay rose just 2 percent nationally, according to a recent salary survey. One graduating college senior, posting anonymously on the Web, claimed that Facebook offered a $100,000 salary, a $50,000 signing bonus, and $120,000 in stock options. Facebook declined to comment.
According to the prospectus filed in connection with Facebook’s planned initial public offering of stock, the company’s headcount jumped from 2,127 to 3,200 full-time employees in 2011. Unlike some large companies, Facebook does not leave recruiting programmers to its human resources department. “The HR departments are in one building and engineering is in another,” says Goldfein. “Recruitment sits with us.”
The best hiring strategies simultaneously test skills and advertise Facebook’s internal culture, which Goldfein says values “clever workarounds that shortcut complexity.” In addition to the Hacker Cup and a series of similar “Camp Hackathon” contests that tour U.S. colleges, there’s a set of fiendishly tricky online puzzles that Facebook maintains online. Solving them with sufficient style can net a phone call from a recruiter. “This is a way to say that if you’re brilliant we don’t care where you worked and if you have a college degree,” says Goldfein.
All that reinforces Facebook’s status as a cool place to work. On Glassdoor, a job information site, Facebook leads technology companies in a ranking by employees of the best workplaces. In another survey that asked workers under 40 where they would most like to get a job, Facebook placed third, behind Google and Apple. Increasingly, other large technology companies aren’t even the stiffest competition for talent, says Rusty Rueff, a board member at Glassdoor. Many talented young people in Silicon Valley are finding that investors and startup accelerator programs will back them to go it alone and found their own companies.
One consequence is that technology companies are buying startups simply as a way to hire their twentysomething founders. Another is that companies aren’t hiring for specific jobs. Facebook puts new hires through a six-week boot camp where they rotate through projects, choosing one that suits them best. “Facebook and other companies doing this are saying, ‘You can work for us and still be entrepreneurial and create your own thing,’” Rueff says.
Although the coder competition looks like a fun and free-wheeling meritocracy, it also reflects problems in the U.S. education system. Very few women participate, and most of the winners are from overseas. “Facebook [is] aggressively going to other countries because there aren’t enough skilled people in the U.S.,” says Goldfein.
All three of the 2011 Hacker Cup winners were foreign men aged 26 or younger. Facebook hired the second-place finisher. The first-place winner was already employed by Google.
Programmer nails real-time rendering of ultra-realistic human skin
Graphics researcher Jorge Jimenez has cracked the problem of rendering what he calls “ultra realistic skin” in real time with consumer-level computer and graphics hardware. It’s a breakthrough made possible by the process of separable subsurface scattering (SSS), which quickly renders the translucent properties of skin and its effect on light in two post-processing passes. The code is based wholly on original research using DirectX 10. Jimenez describes the achievement as the result of hours of “research, desperation, excitement, happiness, pride, sadness and extreme dedication.”
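The core trick behind “separable” filtering is worth seeing in miniature. This is not Jimenez’s DirectX code - just an illustrative NumPy sketch of why splitting a blur into a horizontal pass and a vertical pass is so much cheaper than one big 2-D convolution, which is the same structural idea his two post-processing passes exploit:

```python
import numpy as np

# Illustrative separable Gaussian blur. Instead of one expensive 2-D
# convolution (O(n^2) kernel taps per pixel), the image is filtered with
# a cheap 1-D kernel twice: along rows, then along columns (O(n) taps
# per pixel per pass).
def gaussian_kernel_1d(radius, sigma):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()  # normalize so brightness is preserved

def separable_blur(image, radius=3, sigma=1.5):
    kernel = gaussian_kernel_1d(radius, sigma)
    # Horizontal pass: convolve every row with the 1-D kernel.
    horizontal = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    # Vertical pass: convolve every column of the intermediate result.
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, horizontal)
```

Real separable SSS shades with a sum of Gaussians fitted to skin’s diffusion profile, and runs on the GPU per frame, but the two-pass structure is the reason it is fast enough for real time.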
Though Jimenez has released a high definition video of the effect, he’s gone two better by releasing downloadable executable demo files that will run on a home PC provided it has a powerful enough GPU, as well as making the source code available on GitHub.
Though the code runs on consumer-level hardware, it’ll take more than an everyday PC to run well. On his GeForce GTX 580-equipped machine Jimenez was able to run the demo at a mean of 112.5 frames per second, varying between 80 and 160 FPS. It’s worth bearing in mind that that’s a graphics card that costs about US$470 from Amazon.
And it may be too early to salivate at the prospect of a Call of Duty, Mass Effect or Elder Scrolls sequel with such realistic characters. The demo consists of a single, stationary head and shoulders - literally a world apart from the dynamic, character-filled environments of modern video games. If the principles are applied to games in the near future, it may be that the results are significantly watered down simply because the graphics processors have a lot more on their plate (unless Attack of the Gigantic Mutant Killer Head from Venus is released any time soon).
And SSS alone is not sufficient for rendering realistic character models. “Efforts towards rendering ultra realistic skin are futile if they are not coupled with HDR, high quality bloom, depth of field, film grain, tone mapping, ultra high quality models, parametrization maps, high quality shadow maps (which are lacking on my demo) and a high quality antialiasing solution,” writes Jimenez on his blog. “If you fail on any of them, the illusion of looking at a real human will be broken.” The task of rendering realistic skin is especially challenging close up at 1080p, he adds.
It’s an impressive achievement, and one you can observe in all its HD glory in the video below. Of course, if you’ve got the hardware, you can run the demo for yourself.
One word, “WOW.”
The Quickest Way to Blog with Jekyll.
New to blogging with Jekyll? Read the introduction. Jekyll-Bootstrap ships with a complete pre-built Jekyll directory structure for blogging, modular theming, plug-and-play commenting, analytics, new post and page generators, and coded page-stubs to get you rolling.
Without Jekyll-Bootstrap, you’d have to configure every single page of your blog. Jekyll-Bootstrap takes you from zero to hosted blog in 3 minutes, really!
Free and Easy Hosting via GitHub Pages
Jekyll-Bootstrap is 100% compatible with deploying to GitHub Pages. Just push your repository to a valid GitHub Pages endpoint and GitHub hosts your website <3.
Progressive, Unified Development
Ensuring your Jekyll blog is always compatible with GitHub Pages means development can move the most users forward, helping to unify the currently fragmented Jekyll ecosystem. Look forward to more and better features that simply drop in.
Zero to Hosted Jekyll Blog in 3 Minutes
1 - Create a New Repository
Go to your GitHub dashboard and create a new repository named USERNAME.github.com.
2 - Install Jekyll-Bootstrap
$ git clone https://github.com/plusjade/jekyll-bootstrap.git USERNAME.github.com
$ cd USERNAME.github.com
$ git remote set-url origin git@github.com:USERNAME/USERNAME.github.com.git
$ git push origin master
3 - Profit
After GitHub has had a couple of minutes to do its magic, your blog will be publicly available at http://USERNAME.github.com
Already have your blog on GitHub?
I’ll assume you have the Jekyll gem installed on your local machine. Run Jekyll-Bootstrap-Core locally - start Jekyll’s built-in server from your repository directory - to see what all the fuss is about:
See it in action at http://localhost:4000.