A Better Way To Program
This video will change the way you think about programming. The argument is clear and impressive - it suggests that we really are building programs with one hand tied behind our backs. After you have watched the video you will want the tools demonstrated.
We often focus on programming languages and think that we need a better language to program better. Bret Victor gave a talk that demonstrated that this is probably only a tiny part of the problem. The key is probably interactivity. Don’t wait for a compile to complete to see what effect your code has on things - if you can see it in real time then programming becomes much easier. Currently we are programming with one arm tied behind our backs because the tools that we use separate us from what we write and what happens.
Interactivity makes code understandable.
Moving on, the next idea is that instead of reading code and understanding it, seeing what the code does is understanding it. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code?
All of this is explained and demonstrated in this long (1 hour) video. It does start very slowly, though, and is occasionally self-indulgent. But, as they say, if you watch just one video this year make it this one.
It eventually gets going, and it isn’t only about game programming - at about 18 minutes in you will find the same ideas applied to more abstract coding and even to other engineering disciplines.
There are some socio-political ideas explained along the way - feel free to disagree with them - but don’t ignore the important technical points being made.
The talk was given at CUSEC 2012 (The Canadian University Software Engineering Conference).
Bret Victor is clearly someone to keep an eye on. Have a look at his web site for even more really interesting ideas.
Advice From An Old Programmer — Learn Python The Hard Way, 2nd Edition
You’ve finished this book and have decided to continue with programming. Maybe it will be a career for you, or maybe it will be a hobby. You’ll need some advice to make sure you continue on the right path, and get the most enjoyment out of your newly chosen activity.
I’ve been programming for a very long time. So long that it’s incredibly boring to me. At the time that I wrote this book, I knew about 20 programming languages and could learn new ones in about a day to a week depending on how weird they were. Eventually though this just became boring and couldn’t hold my interest anymore. This doesn’t mean I think programming is boring, or that you will think it’s boring, only that I find it uninteresting at this point in my journey.
What I discovered after this journey of learning is that it’s not the languages that matter but what you do with them. Actually, I always knew that, but I’d get distracted by the languages and forget it periodically. Now I never forget it, and neither should you.
Which programming language you learn and use doesn’t matter. Do not get sucked into the religion surrounding programming languages as that will only blind you to their true purpose of being your tool for doing interesting things.
Programming as an intellectual activity is the only art form that allows you to create interactive art. You can create projects that other people can play with, and you can talk to them indirectly. No other art form is quite this interactive. Movies flow to the audience in one direction. Paintings do not move. Code goes both ways.
Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.
People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.
Of course, all of this advice is pointless. If you liked learning to write software with this book, you should try to use it to improve your life any way you can. Go out and explore this weird wonderful new intellectual pursuit that barely anyone in the last 50 years has been able to explore. Might as well enjoy it while you can.
Finally, I’ll say that learning to create software changes you and makes you different. Not better or worse, just different. You may find that people treat you harshly because you can create software, maybe using words like “nerd”. Maybe you’ll find that because you can dissect their logic that they hate arguing with you. You may even find that simply knowing how a computer works makes you annoying and weird to them.
To this I have just one piece of advice: they can go to hell. The world needs more weird people who know how things work and who love to figure it all out. When they treat you like this, just remember that this is your journey, not theirs. Being different is not a crime, and people who tell you it is are just jealous that you’ve picked up a skill they never in their wildest dreams could acquire.
You can code. They cannot. That is pretty damn cool.
The Greatness of Git
When Linus Torvalds says he is going to work on a side project he doesn’t think small and he doesn’t work slowly.
When he created “Git,” the software source control and collaboration system that runs Linux kernel development, he started writing code on a Sunday (April 3, 2005) and emerged just a few days later with a new revision control system that today is regarded as one of the best pieces of software ever written (second, at least, to Linux, of course).
As Andrew Morton said when introducing Linus to speak about Git to an audience at Google, Git is “expressly designed to make you feel less intelligent than you thought you were.”
Software Freedom Law Center Founder and co-author of the GPL Eben Moglen said during a keynote panel at LinuxCon last August: “Linus was presented with a nasty weekend once upon a time and out of it came Git. Another brilliant achievement, you understand. A work of superb design that is going to change the software industry and the world…because one man had one itch one weekend that was really biting, and he had to invent something. And he’s a brilliantly inventive man and scored another hole in one.”
Git had to be great in order to support the unmatched rate of development that Linux requires. Today, the Linux community applies more than five patches per hour to the kernel and to date has written more than 15 million lines of code. The sheer size of Linux development has made the project one from which others have borrowed both collaborative development lessons and tools - like Git. Today Git is used by the Linux community, as well as by developers working on projects that range from Ruby on Rails to Android to Perl and Eclipse, and many more.
The popularity of Git is also resulting in it becoming part of the technology vernacular, with businesses based on Git flourishing.
Consider GitHub. This is an amazing code repository that uses the Git revision control system and has become one of the most popular places to host and collaborate on software. This service is being used by more than a million people to store over two million code repositories.
Could Git also be getting into publishing? Maybe. Wired.com reporter Bob McMillan recently took GitHub for a spin, publishing his story about the repository in the repository.
“GitHub was originally designed for software developers…But nowadays, it’s also being used to oversee stuff outside the programming world, including DNA data and Senate bills that may turn into laws and all sorts of other stuff you can put into a text file, such as, well, a Wired article.”
He might have gotten a little more than he bargained for with all the collaboration, but his experiment demonstrates its power.
GitHire is another new online application and service that builds upon Git for finding the world’s best programmers. GitHire will crawl git repositories, find and rank programmers based on their code and reputation and provide employers with a short list of the world’s best talent most relevant to their needs. If you’re a software developer and doubted it before, code is most definitely the new resume.
There are a number of other examples, as well as native Git for Windows, Git implementations in other languages, tutorial businesses based on Git, and more.
The measure of truly great software development is use. When others use it and build new projects and/or businesses from it, you know it’s truly great. This is the essence of Linux and open source software development. By writing the best code and sharing it with the world, everything gets better and faster, and ever more ways to collaborate and share emerge.
Letting Hackers Compete, Facebook Eyes New Talent - Technology Review
As it readies for an IPO, the social network puts engineers, not HR, in charge of a global search for young programmers.
Late this January, some 75,000 people around the planet sat in front of their computers and pondered how to make anagrams from a bowl of alphabet soup. They were participants in the Hacker Cup, an international programming battle that Facebook organized to help it find the brightest young software engineers before competitors like Google do.
After three more rounds of brain teasers, Facebook will fly the top 25 coders to its head office in Menlo Park for an adrenaline-soaked finale this March that will award the champion $5,000. In return, Facebook gets a shot at hiring the stars discovered along the way.
“I’m in an all-out land grab for talent,” says Jocelyn Goldfein, Facebook’s director of engineering and the most senior woman on its technical staff. The social network builds almost all of its own software, and young, smart coders are the company’s most critical asset as it manages the comments, photos, and “likes” of more than 800 million users. “We are in uncharted waters every day,” says Goldfein. “What’s great about young people is that they don’t know what’s impossible, so they try crazy things and lead us to be the first to make them work.”
Google and many other companies are chasing the same code slingers as Facebook, causing salaries to shoot up. Average salaries for technology professionals in Silicon Valley rose 5.2 percent in 2011 to break the $100,000 barrier, while pay rose just 2 percent nationally, according to a recent salary survey. One graduating college senior, posting anonymously on the Web, claimed that Facebook offered a $100,000 salary, a $50,000 signing bonus, and $120,000 in stock options. Facebook declined to comment.
According to the prospectus filed in connection with Facebook’s planned initial public offering of stock, the company’s headcount jumped from 2,127 to 3,200 full-time employees in 2011. Unlike some large companies, Facebook does not leave recruiting programmers to its human resources department. “The HR departments are in one building and engineering is in another,” says Goldfein. “Recruitment sits with us.”
The best hiring strategies simultaneously test skills and advertise Facebook’s internal culture, which Goldfein says values “clever workarounds that shortcut complexity.” In addition to the Hacker Cup and a series of similar “Camp Hackathon” contests that tour U.S. colleges, there’s a set of fiendishly tricky puzzles that Facebook maintains online. Solving them with sufficient style can net a phone call from a recruiter. “This is a way to say that if you’re brilliant we don’t care where you worked and if you have a college degree,” says Goldfein.
All that reinforces Facebook’s status as a cool place to work. On Glassdoor, a job information site, Facebook leads technology companies in a ranking by employees of the best workplaces. In another survey that asked workers under 40 where they would most like to get a job, Facebook placed third, behind Google and Apple. Increasingly, other large technology companies aren’t even the stiffest competition for talent, says Rusty Rueff, a board member at Glassdoor. Many talented young people in Silicon Valley are finding that investors and startup accelerator programs will back them to go it alone and found their own companies.
One consequence is that technology companies are buying startups simply as a way to hire their twentysomething founders. Another is that companies aren’t hiring for specific jobs. Facebook puts new hires through a six-week boot camp where they rotate through projects, choosing one that suits them best. “Facebook and other companies doing this are saying, ‘You can work for us and still be entrepreneurial and create your own thing,’” Rueff says.
Although the coder competition looks like a fun and free-wheeling meritocracy, it also reflects problems in the U.S. education system. Very few women participate, and most of the winners are from overseas. “Facebook [is] aggressively going to other countries because there aren’t enough skilled people in the U.S.,” says Goldfein.
Of the 2011 Hacker Cup winners, all three were foreign men 26 or younger. Facebook hired the second-place finisher. The first-place winner was already employed by Google.
Programmer nails real-time rendering of ultra-realistic human skin
Graphics researcher Jorge Jimenez has cracked the problem of rendering what he calls “ultra realistic skin” in real-time with consumer-level computer and graphics hardware. It’s a breakthrough made possible by the process of separable subsurface scattering (SSS) which quickly renders the translucent properties of skin and its effect on light in two post-processing passes. The code is based wholly on original research using DirectX 10. Jimenez describes the achievement as the result of hours of “research, desperation, excitement, happiness, pride, sadness and extreme dedication.”
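Jimenez’s shaders aren’t reproduced here, but the reason a “separable” filter is cheap enough for real time can be sketched in plain Python (an illustration of the separability principle only, not the actual SSS code): a 2D blur whose kernel is the outer product of two 1D kernels can be applied as a horizontal pass followed by a vertical pass.

```python
# Why "separable" filtering is fast (an illustration of the principle,
# not Jimenez's actual SSS shader): a 2D kernel that is the outer
# product of two 1D kernels can be applied as two cheap 1D passes.

def convolve_1d(row, kernel):
    """Filter a list with a symmetric 1D kernel, zero-padding the edges."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - r
            if 0 <= j < len(row):
                acc += w * row[j]
        out.append(acc)
    return out

def blur_two_pass(image, kernel):
    """Horizontal pass, then vertical pass: O(k) work per pixel per pass."""
    rows = [convolve_1d(row, kernel) for row in image]
    cols = [convolve_1d(list(c), kernel) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def blur_full_2d(image, kernel):
    """Equivalent single pass with the full 2D outer-product kernel: O(k^2)."""
    h, w, r = len(image), len(image[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ky, wy in enumerate(kernel):
                for kx, wx in enumerate(kernel):
                    yy, xx = y + ky - r, x + kx - r
                    if 0 <= yy < h and 0 <= xx < w:
                        out[y][x] += wy * wx * image[yy][xx]
    return out
```

For a kernel of width k this costs 2k multiplies per pixel instead of k², which is the difference between an offline render and a couple of cheap post-processing passes.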
Though Jimenez has released a high definition video of the effect, he’s gone two better by releasing downloadable executable demo files that will run on a home PC provided it has a powerful enough GPU, as well as making the source code available on GitHub.
Though the code runs on consumer-level hardware, it’ll take more than an everyday PC to run well. On his GeForce GTX 580-equipped machine Jimenez was able to run the demo at a mean of 112.5 frames per second, varying between 80 and 160 FPS. It’s worth bearing in mind that that’s a graphics card that costs about US$470 from Amazon.
And it may be too early to salivate at the prospect of a Call of Duty, Mass Effect or Elder Scrolls sequel with such realistic characters. The demo consists of a single, stationary head and shoulders - literally a world apart from the dynamic, character-filled environments of modern video games. If the principles are applied to games in the near future, it may be that the results are significantly watered down simply because the graphics processors have a lot more on their plate (unless Attack of the Gigantic Mutant Killer Head from Venus is released any time soon).
And SSS alone is not sufficient for rendering realistic character models. “Efforts towards rendering ultra realistic skin are futile if they are not coupled with HDR, high quality bloom, depth of field, film grain, tone mapping, ultra high quality models, parametrization maps, high quality shadow maps (which are lacking on my demo) and a high quality antialiasing solution,” writes Jimenez on his blog. “If you fail on any of them, the illusion of looking at a real human will be broken.” The task of rendering realistic skin is especially challenging close up at 1080p, he adds.
It’s an impressive achievement, and one you can observe in all its HD glory in the video below. Of course, if you’ve got the hardware, you can run the demo for yourself.
One word, “WOW.”
The Quickest Way to Blog with Jekyll.
New to blogging with Jekyll? Read the introduction.

Jekyll-Bootstrap ships with a complete pre-built Jekyll directory structure for blogging, modular theming, plug-and-play commenting, analytics, new post and page generators, and coded page-stubs to get you rolling.
Without Jekyll-Bootstrap, you’d have to configure every single page of your blog. Jekyll-Bootstrap takes you from 0 to hosted blog in 3 minutes, really!
Free and Easy Hosting via GitHub Pages
Jekyll-bootstrap is 100% compatible with deploying to GitHub. Just push your repository to a valid GitHub Pages endpoint and GitHub hosts your website <3.
Progressive, Unified Development
Ensuring your Jekyll blog is always compatible with GitHub Pages means development can move the most users forward. This helps improve the current horizontal and highly segmented Jekyll ecosystem. Look forward to more and better features that simply drop in.
Zero to Hosted Jekyll Blog in 3 Minutes
1 - Create a New Repository
Go to your GitHub dashboard and create a new repository named USERNAME.github.com.
2 - Install Jekyll-Bootstrap

$ git clone https://github.com/plusjade/jekyll-bootstrap.git USERNAME.github.com
$ cd USERNAME.github.com
$ git remote set-url origin git@github.com:USERNAME/USERNAME.github.com.git
$ git push origin master
3 - Profit
After GitHub has a couple of minutes to do its magic, your blog will be publicly available at http://USERNAME.github.com
*Already have your blog on GitHub?
I’ll assume you have the Jekyll gem installed on your local machine. Run Jekyll-Bootstrap-Core locally to see what all the fuss is about.
See it in action at http://localhost:4000.
Prof Aims to Rebuild Google With Stuff In Desk Drawer | Wired Enterprise
Dave Anderson looked into a desk drawer filled with tiny computers. Each was no bigger than a hardback novel, and their chips ran no faster than 600 MHz. Built by a little-known company called Soekris Engineering, they were meant to be wireless access points or network firewalls, and that’s how Anderson — a computer science professor at Carnegie Mellon — used them in a previous research project. But that project was over, and he thought: “They’ve got to be good for something else.”
At first, he decided these tiny machines could be super-low-power DNS (domain name system) servers — servers that take site names and translate them to a numeric internet address — and he asked some Ph.D. students to make it happen. “I wondered,” he remembers, “if we could do this on a wimpy platform that consumed only about 5 watts of power rather than 500.” Those students proved they could. But they also told Anderson he was thinking too small.
After tinkering with his tiny machines, they realized that if you strung a bunch of them together, you could run a massive application each machine could never execute on its own. The trick was to split the application’s duties into tiny pieces and spread them evenly across the network. “They were right,” Anderson says of his students. “We could use these boxes to run high-performance large-scale key-value stores — the kind of [databases] you would run behind the scenes at Facebook or Twitter. And the rest is publication history.”
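As a toy illustration of that trick (not FAWN’s actual design, which uses consistent hashing and log-structured flash stores; all names below are made up), a key-value store can be spread evenly across many small nodes by hashing each key to pick its home:

```python
# Toy sketch of spreading a key-value store across many small nodes.
# Each "node" here is just a dict; in a FAWN-style cluster it would be
# a wimpy machine with its own flash storage.

import hashlib

class ShardedStore:
    def __init__(self, num_nodes):
        self.nodes = [dict() for _ in range(num_nodes)]

    def _node_for(self, key):
        # Hash the key so keys spread roughly evenly across nodes.
        digest = hashlib.sha1(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def put(self, key, value):
        self._node_for(key)[key] = value

    def get(self, key):
        return self._node_for(key).get(key)

store = ShardedStore(num_nodes=8)
for i in range(1000):
    store.put("key-%d" % i, i)

print(store.get("key-42"))            # prints 42
print([len(n) for n in store.nodes])  # roughly 125 keys per node
```

Every get or put touches exactly one node, so adding nodes scales capacity and throughput together - which is what lets the tiny boxes collectively behave like one big server.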
The year was 2008, and as it turns out, Anderson and his students were at the forefront of a movement that could reinvent the way the world uses its servers, making them significantly more efficient — and cramming them into much smaller spaces. Startups such as SeaMicro and Calxeda are now building servers using the hundreds of low-power processor cores originally designed for cell phones and other mobile devices. HP is set to resell Calxeda machines as it explores similar systems with a research effort called Project Moonshot. And the giants of the internet — including Google, Amazon, and Facebook — are seriously considering the possibility of running their operations atop the sort of “wimpy” processors Anderson found in his desk drawer.
“Wimpy” is the official term. Now into its fourth year, Anderson’s project is known as the Fast Array of Wimpy Nodes, or FAWN. He regrets the name. “No manufacturer wants to advertise their products as wimpy,” he says. But the name certainly suits his research, and despite the negative connotation, the project has attracted the interest of the largest chip maker on earth. Intel sponsors Anderson’s research, and he works closely with researchers at the Pittsburgh lab Intel runs on the Carnegie Mellon campus.
The rub is that the Fast Array of Wimpy Nodes isn’t always fast. In some cases, software must be significantly rewritten to achieve high speeds on a collection of low-power processors, and other applications aren’t suited to the setup at all.
Like so many others across the server world, Intel is approaching the wimpy-node idea with skepticism — and not just because it makes an awful lot of money selling the far-from-wimpy processors that power today’s servers. “Intel is trying to walk a difficult line,” Anderson says. “Yes, a lot of their profit is from big brawny processors — and they don’t want to undercut that. But they also don’t want their customers to get inappropriately excited about wimpy processors and then be disappointed.”
Dave Anderson says that skepticism is healthy. But only up to a point. His research shows that many applications can be far more efficient on wimpy nodes, including not only ordinary web serving but, yes, large databases. “Intel realizes this too,” he says. “And they don’t want to get blindsided.”
Google Slaps Wimps
Google is a search and advertising company. But it’s also the company the world looks to for the latest thinking on hardware and software infrastructure. Google uses custom-built software platforms to distribute enormous applications across a worldwide network of custom-built servers, and this do-it-yourself approach to parallel computing has inspired everything from Hadoop, the increasingly popular open source platform for crunching data with vast server clusters, to Facebook’s Open Compute Project, a collective effort to improve the efficiency of the world’s servers.
So when Urs Hölzle, the man who oversees Google’s infrastructure, weighed in on the wimpy node idea, the server world sat up and noticed. If anyone believes in wimpy nodes, the world assumed, it’s Hölzle. But with a paper published in chip design magazine IEEE Micro, Google’s parallel computing guru actually took the hype down a notch. “Brawny cores still beat wimpy cores, most of the time,” read the paper’s title.
The problem, Hölzle said, was something called Amdahl’s law: If you parallelize only part of a system, there’s a limit to the performance improvement. “Slower but energy efficient ‘wimpy’ cores only win for general workloads if their single-core speed is reasonably close to that of mid-range ‘brawny’ cores,” he wrote. “In many corners of the real world, [wimpy core systems] are prohibited by law — Amdahl’s law.”
In short, he argued that moving information between so many cores can bog down the entire system. But he also complained that if you install a wimpy node array, you may have to rewrite your applications. “Cost numbers used by wimpy-core evangelists always exclude software development costs,” he said. “Unfortunately, wimpy-core systems can require applications to be explicitly parallelized or otherwise optimized for acceptable performance.”
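Hölzle’s argument rests on the standard form of Amdahl’s law, which is easy to check with a few lines of arithmetic (a generic illustration, not code from the paper):

```python
# Amdahl's law: if a fraction p of a workload parallelizes perfectly and
# the rest stays serial, the speedup on n cores is 1 / ((1 - p) + p / n).
# The serial fraction caps the win no matter how many wimpy cores you add.

def amdahl_speedup(p, n):
    """Overall speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# With 90% parallel work the speedup can never exceed 1 / (1 - 0.9) = 10x,
# even with unlimited cores.
for n in (1, 4, 16, 64, 1024):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This is why the paper insists a wimpy core’s single-thread speed must stay “reasonably close” to a brawny core’s: the serial remainder runs on one slow core, and it dominates as the core count grows.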
Many “wimpy-core evangelists” took issue with Hölzle’s paper. But Dave Anderson calls it “reasonably balanced,” and he urges readers to consider the source. “I think you should also realize that this is written from the perspective of a company that doesn’t want to change too much of its software,” he says.
Anderson’s research has shown that some applications do require a significant rewrite, including virus scanning and other tasks that look for patterns in large amounts of data. “We actually locked our entire cluster because the [pattern recognition] algorithms we used allocated more memory than our individual cores had,” he remembers. “If you’re using wimpy cores, they probably don’t have as much memory per processor as the brawny cores. This can be a big limiter.”
But not all applications use as much memory. And in some cases, software can run on a wimpy core system with relatively few changes. Mozilla is using SeaMicro servers — based on Intel’s Atom mobile processor — to facilitate downloads of its Firefox browser, saying the cluster draws about one fifth the power and uses about a fourth of the space of its previous cluster. Anderson points to this as an example of a wimpy core system that can be rolled out with relatively little effort.
Anderson’s stance echoes that of Intel. This summer, when we asked Jason Waxman — the general manager of high-density computing in Intel’s data center group — about the company’s stance on wimpy nodes, he said that many applications — including those run by Google — are unsuited to the setup, but that others — including basic web serving — work just fine.
In other words, Google’s needs may not be your needs. Even if your applications are similar to Google’s, you may be more willing to rewrite your code. “I’m a researcher,” Anderson says. “I’m completely happy — and actually enjoy — reinventing the software. But there are others who would never ever want to rewrite their software. The question should be: As a company, where do you fit on that spectrum?”
Wimps Get Brawny
At the same time, wimpy nodes are evolving. Although low-power processors such as the Intel Atom and the ARM chips used by Calxeda can’t handle as much memory as “brawny” servers chips from Intel and AMD, newer versions are on the way — and these will shrink the memory gap. Facebook has said it can’t move to ARM chips because of the memory limitations, but it has also indicated it could move to wimpy cores once those limitations are resolved.
As the chips evolve, the rest of the system is evolving around them. Dave Anderson’s array uses flash storage rather than hard disks, and similar research from Steve Swanson — a professor of computer science and engineering at the University of California, San Diego — has shown wimpy nodes and flash go hand-in-hand. If you move to flash — the same solid-state storage used by smartphones — in place of spinning hard drives, you can use chips with lower clock speeds.
An old fashioned hard drive burns about 10 watts of power even when it’s doing nothing. In order to get the most out of the drive, you need a fast processor. But flash storage doesn’t burn that much power when idle, and that means you can use slower chips. “Adding solid state drives lets you use wimpier cores without giving up as much energy efficiency as you would if you were using a hard drive,” Swanson says. “With a hard drive, you want to use a faster core because it can access the hard drive and then race ahead as quickly as possible for the next access. With a solid state drive, it’s less critical that the processor race ahead to save power while the drive is idle.”
Anderson is also looking at ways to better balance workloads across wimpy node systems — an issue Urs Hölzle alludes to in his paper. “It is a problem,” he says, “but it’s a solvable problem. It just takes research and programmer effort to solve it.” What Hölzle identifies as difficulties, Anderson prefers to think of as research opportunities.
This includes software rewrites. In the short term, many companies — including Google — will frown on the idea. But in the long term, this changes. Since Hölzle published his paper, Google has resolved to rewrite its backend software — which is now being stretched into its second decade — and the new platform may very well move closer to the wimpy end of the spectrum.
Dave Anderson isn’t just looking at how wimpy core systems can be used today. He’s looking at how they can be used tomorrow. “If you came to me and you said: ‘Hey, Dave, how should I build my data center?’, I would not tell you to go and use the wimpiest cores you could find. That’s how I built mine, but I’m trying to push the limit and understand how to make these things practical.”
A geek with a hat » Why programmers work at night
Ask a random programmer when they do their best work and there’s a high chance they will admit to a lot of late nights. Some earlier, some later. A popular trend is to get up at 4am and get some work done before the day’s craziness begins. Others like going to bed at 4am.
The gist of all this is avoiding distractions. But you could just lock the door - so what’s so special about the night?
I think it boils down to three things: the maker’s schedule, the sleepy brain and bright computer screens.
The maker’s schedule
Paul Graham wrote about the maker’s schedule in 2009 - basically that there are two types of schedules in this world. The traditional manager’s schedule, where your day is cut up into hours and a ten-minute distraction costs you, at most, an hour’s worth of time.
On the other hand you have something PG calls the maker’s schedule - a schedule for those of us who produce stuff. Working on large abstract systems involves fitting the whole thing into your mind - somebody once likened this to constructing a house out of expensive crystal glass, and as soon as someone distracts you, it all comes barreling down and shatters into a thousand pieces.
This is why programmers are so annoyed when you distract them.
Because of this huge mental investment, we simply can’t start working until we can expect a couple of hours without being distracted. It’s just not worth constructing the whole model in your head and then having it torn down half an hour later.
In fact, talking to a lot of founders you’ll find out they feel like they simply can’t get any work done during the day. The constant barrage of interruptions, important stuff ™ to tend to and emails to answer simply don’t allow it. So they get most of their “work work” done during the night when everyone else is sleeping.
The sleepy brain
But even programmers should be sleeping at night. We are not some race of super humans. Even programmers feel more alert during the day.
Why then do we perform our most mentally complex work when the brain wants to sleep, and do simpler tasks when our brain is at its sharpest and brightest?
Because being tired makes us better coders.
Similar to the Ballmer Peak, being tired can make us focus better simply because when your brain is tired it has to focus! There isn’t enough leftover brainpower to afford losing concentration.
I seem to get the least work done right after drinking too much tea or having a poorly timed energy drink. It makes me hyperactive: one second I’m checking twitter, the next I’m looking at hacker news, and I just seem to be buzzing all over the place.
You’d think I’d work better – so much energy, so much infinite overclocked brainpower. But instead I keep tripping over myself because I can’t focus for more than two seconds at a time.
Conversely, when I’m slightly tired, I just plomp my arse down and code. With a slightly tired brain I can code for hours and hours without even thinking about checking twitter or facebook. It’s like the internet stops existing.
I feel like this holds true for most programmers out there. We have too much brainpower for ~80% of the tasks we work on - face it, writing that one juicy algorithm requires ten times as much code to produce an environment in which it can run. Even if you’re doing the most advanced machine learning (or something) imaginable, a lot of the work is simply cleaning up the data and presenting results in a lovely manner.
And when your brain isn’t working at full capacity it looks for something to do. Being tired makes you dumb enough that the task at hand is enough.
Bright computer screens
This one is pretty simple. Keep staring at a bright source of light in the evening and your sleep cycle gets delayed. You forget to be tired until 3am. Then you wake up at 11am and when the evening rolls around you simply aren’t tired because hey, you’ve only been up since 11am!
Given enough iterations this can essentially drag you into a different timezone. What’s more interesting is that it doesn’t seem to keep rolling, once you get into that equilibrium of going to bed between 3am and 4am you tend to stay there.
Or maybe that’s just the alarm clocks doing their thing because society tells us we’re dirty dirty slobs if we have breakfast at 2pm.
To conclude, programmers work at night because the night doesn’t impose a time limit on when you have to stop working, which gives you a more relaxed approach; because your brain doesn’t keep looking for distractions; and because a bright screen keeps you awake.