A Better Way To Program
This video will change the way you think about programming. The argument is clear and impressive - it suggests that we really are building programs with one hand tied behind our backs. After you have watched the video you will want the tools demonstrated.
We often focus on programming languages and think that we need a better language to program better. Bret Victor gave a talk demonstrating that this is probably only a tiny part of the problem. The key is probably interactivity. Don’t wait for a compile to complete to see what effect your code has - if you can see it in real time then programming becomes much easier. Currently we are programming with one hand tied behind our backs because the tools that we use separate what we write from what happens.
Interactivity makes code understandable.
Moving on, the next idea is that instead of reading code and understanding it, seeing what the code does is understanding it. Programmers can only understand their code by pretending to be computers and running it in their heads. As this video shows, this is incredibly inefficient and, as we generally have a computer in front of us, why not use it to help us understand the code?
All of this is explained and demonstrated in this long (one-hour) video. It also has the problem that it starts very slowly and is occasionally self-indulgent. But, as they say, if you watch just one video this year, make it this one.
It eventually gets going, and it isn’t only about game programming: at about 18 minutes in you will find the same ideas applied to more abstract coding and even to other engineering disciplines.
There are some socio-political ideas explained along the way - feel free to disagree with them - but don’t ignore the important technical points being made.
The talk was given at CUSEC 2012 (the Canadian University Software Engineering Conference).
Bret Victor is clearly someone to keep an eye on. Have a look at his web site for even more really interesting ideas.
Advice From An Old Programmer — Learn Python The Hard Way, 2nd Edition
You’ve finished this book and have decided to continue with programming. Maybe it will be a career for you, or maybe it will be a hobby. You’ll need some advice to make sure you continue on the right path, and get the most enjoyment out of your newly chosen activity.
I’ve been programming for a very long time. So long that it’s incredibly boring to me. At the time that I wrote this book, I knew about 20 programming languages and could learn new ones in about a day to a week depending on how weird they were. Eventually though this just became boring and couldn’t hold my interest anymore. This doesn’t mean I think programming is boring, or that you will think it’s boring, only that I find it uninteresting at this point in my journey.
What I discovered after this journey of learning is that it’s not the languages that matter but what you do with them. Actually, I always knew that, but I’d get distracted by the languages and forget it periodically. Now I never forget it, and neither should you.
Which programming language you learn and use doesn’t matter. Do not get sucked into the religion surrounding programming languages as that will only blind you to their true purpose of being your tool for doing interesting things.
Programming as an intellectual activity is the only art form that allows you to create interactive art. You can create projects that other people can play with, and you can talk to them indirectly. No other art form is quite this interactive. Movies flow to the audience in one direction. Paintings do not move. Code goes both ways.
Programming as a profession is only moderately interesting. It can be a good job, but you could make about the same money and be happier running a fast food joint. You’re much better off using code as your secret weapon in another profession.
People who can code in the world of technology companies are a dime a dozen and get no respect. People who can code in biology, medicine, government, sociology, physics, history, and mathematics are respected and can do amazing things to advance those disciplines.
Of course, all of this advice is pointless. If you liked learning to write software with this book, you should try to use it to improve your life any way you can. Go out and explore this weird wonderful new intellectual pursuit that barely anyone in the last 50 years has been able to explore. Might as well enjoy it while you can.
Finally, I’ll say that learning to create software changes you and makes you different. Not better or worse, just different. You may find that people treat you harshly because you can create software, maybe using words like “nerd”. Maybe you’ll find that because you can dissect their logic that they hate arguing with you. You may even find that simply knowing how a computer works makes you annoying and weird to them.
To this I have just one piece of advice: they can go to hell. The world needs more weird people who know how things work and who love to figure it all out. When they treat you like this, just remember that this is your journey, not theirs. Being different is not a crime, and people who tell you it is are just jealous that you’ve picked up a skill they never in their wildest dreams could acquire.
You can code. They cannot. That is pretty damn cool.
Teach Your Kids How To Code, Not How To Speak Chinese
There is a belief among some — perhaps out of fear, or prudence — that children today should study Mandarin Chinese as their second language. If China is going to rule the world in a few decades, the thinking goes, at least my kid will be able to communicate.
That’s an interesting idea, but the reality is that no matter who is ruling the world, if your kids don’t live in China, their lives are much more likely to involve software than speaking Chinese.
So make sure the second language they study is code. Then their third language can be anything you’d like — Mandarin, Spanish, Latin, French, whatever.
That’s not to say that everyone should become a computer scientist — that’s not practical. But it’s a good idea for everyone to at least understand how computers and software work, and how to write rudimentary code. It can be as simple as HTML, or as complex as C — that part is up to the individual, and the actual languages will change every so often. But a little code is good for everyone.
Why? What’s the point?
Think about how profoundly software has changed industries like, say, communication. The phone in your pocket even ten years ago was lucky to have a black-and-white display with a built-in game like “Snake”. A decade later, your iPhone screen has more pixels than your old TV, thousands of software applications are a click away, and you can message someone across the world in a second.
Apply that change to every industry, from education — our focus today — to medicine, construction, the arts, etc. And we morph again, from a manufacturing economy to a service economy to a software economy. Again, not everyone will be writing code. But many more people will be ordering it, writing it, managing it, and interacting with it. It makes sense to understand it and to be able to create at least a little.
Personally, I don’t regret spending seven years of my life learning French — it’s cool to be able to say hello and order a croissant in Paris in the local language, before the waiter responds in perfect English.
But I do wish I’d spent at least some of that time instead learning how to write computer software.
As it is, I managed to teach myself HTML in the mid-’90s, a skill I use every day. But I wish there had been a stronger focus on computer engineering in my elementary and high school curriculum, even at the expense of a foreign language.
That’s not to say that people shouldn’t learn Chinese. Various reports — in The Economist and The New York Times — have traced the growth of Chinese language programs in American schools. But in reality, it’s not all that practical. See this point-counterpoint from BusinessWeek.
Some of our future will surely involve doing business with Chinese corporations and people. But much more of it will involve science, software, mathematics, and engineering. Software is the real future. So teach your child how to code first — and how to speak Chinese second.
Why we need even more programming languages
Whenever a new programming language is announced, a certain segment of the developer population always rolls its eyes and groans that we have quite enough to choose from already, thank you very much.
To its credit, Google says it’s doing just that, in tandem with its Dart effort. But once a language reaches a certain tipping point of popularity, overhauling it to include support for new features, paradigms, and patterns is easier said than done. In fact, judging by the past ten years or so, it can be very, very difficult.
The PHP 6 debacle
Take PHP, for example. The next version of the popular Web applications language shipped its second release candidate this week, and the final build is expected to arrive early next year. This release won’t be the long-awaited PHP 6, however. Instead, it will be a far less ambitious revision designated PHP 5.4.
Doubtless that’s a disappointment to developers who have been anticipating PHP 6 since the project launched in October 2005. But at this point, even if a release dubbed PHP 6 does eventually appear, it will bear little resemblance to the version that’s borne the designation so far. PHP creator Rasmus Lerdorf officially shelved the PHP 6 project in March 2010 after almost five years of fruitless labor, in favor of refocusing on a formal 5.4 release.
Some of the reasons for the PHP 6 effort’s failure were technical. The primary focus of the project was retooling PHP to include native support for Unicode. That wouldn’t be limited to strings, either; in PHP 6, developers would be able to specify variable names, function names, and other identifiers using any Unicode script, including multibyte scripts such as Chinese and Devanagari. As the years rolled on and hidden gotchas began to surface, however, it became clear that the PHP developers had bitten off more than they could chew.
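For a sense of what native Unicode identifier support means in practice, Python 3 did eventually ship exactly this feature (via PEP 3131). A minimal sketch of the kind of code the PHP 6 effort was aiming to allow:

```python
# Python 3 accepts identifiers from any Unicode script (PEP 3131),
# roughly what PHP 6 hoped to offer for variable and function names.

数値 = 42              # a variable named with Chinese characters

def 二倍(x):
    # The function name itself is a Unicode identifier.
    return x * 2

print(二倍(数値))      # prints 84
```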
It didn’t help that as an open source project, PHP development is largely a volunteer effort. According to PHP contributor Andrei Zmievski, relatively few developers really understood the Unicode push and were committed to making it happen. It was hard to get excited about rewriting lots of working code to support Unicode, and enthusiasm for the project waned. By the time PHP 6 was abandoned in 2010, Lerdorf observed that PHP development “hasn’t been fun for quite a while.”
Languages move forward, but slowly
Personally, I’m no fan of PHP. I’ve long held it’s a poster child for bad language design, so it doesn’t surprise me to learn that evolving it into something better is a Sisyphean task. But I shouldn’t single out PHP here. In fact, many of the more popular languages have struggled to move forward with major new versions in recent years.
The best example is surely Perl, whose community has been working on version 6 continuously since 2000. In 2008, the Perl community chastised me for suggesting that Perl 6 was “vaporware.” It exists, they insisted, and you can use it today. But while that may be technically true, whether Perl 6 will ever be mature enough for production use remains an open question. Although several implementations are available, and some are even fairly stable, so far none of them supports all of the features of the Perl 6 specification.
The Python community has had better luck implementing its language. The most recent version, Python 3, was released in 2008, after about three years of development. Three years later, however, adoption remains slow. Python 3 took the radical step of breaking backward compatibility with earlier versions, and a number of popular Python libraries and frameworks (including Django) have yet to catch up. As a result, many Python developers still cling to version 2.x, particularly for Web work, and widespread migration to Python 3 is expected to take several more years.
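A few of the backward-incompatible changes give a flavor of why migration stalled; the snippet below runs only under Python 3:

```python
# 1. print is a function, not a statement.
#    Python 2 accepted:  print "hello"   -- a SyntaxError in Python 3.
print("hello")

# 2. / on integers is true division; // is floor division.
assert 7 / 2 == 3.5      # Python 2 returned 3 here
assert 7 // 2 == 3

# 3. str is Unicode text by default; raw bytes are a separate type.
text = "naïve"           # would have needed u"naïve" in Python 2
data = text.encode("utf-8")
assert isinstance(data, bytes)
```

Every 2.x codebase and library touching any of these had to be audited and ported, which is why frameworks lagged for years.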
These kinds of difficulties aren’t limited to scripting languages, either. The Java community has long clamored for significant language updates. Before Java SE 7 shipped earlier this year, it had been five years since the last major release. But while Java SE 7 was originally expected to include much-requested capabilities such as lambda expressions and a module system, Oracle has since delayed those features until Java SE 8 at the earliest.
JavaScript, too, went years without a major new standard. Not that the ECMAScript working group had been idle throughout those years: work on ECMAScript 4 began in 1999, but soon foundered and went on hiatus. The committee reconvened in 2003, but still the various stakeholders couldn’t agree. Around and around they went for another five years, until — much like PHP 6 — the ECMAScript 4 effort was officially abandoned in 2008, in favor of a less-contentious specification that became ECMAScript 3.1.
The lesson from all of these examples is clear: Programming languages move slowly, and the more popular a language is, the slower it moves. It is far, far easier to create a new language from whole cloth than it is to convince the existing user base of a popular language to accept radical changes.
This article, “Why we need even more programming languages,” by Neil McAllister originally appeared at InfoWorld.com.
Dennis Ritchie: The Shoulders Steve Jobs Stood On | Wired Enterprise
And then some.
“When Steve Jobs died last week, there was a huge outcry, and that was very moving and justified. But Dennis had a bigger effect, and the public doesn’t even know who he is,” says Rob Pike, the programming legend and current Googler who spent 20 years working across the hall from Ritchie at the famed Bell Labs.
On Wednesday evening, with a post to Google+, Pike announced that Ritchie had died at his home in New Jersey over the weekend after a long illness, and though the response from hardcore techies was immense, the collective eulogy from the web at large doesn’t quite do justice to Ritchie’s sweeping influence on the modern world. Dennis Ritchie is the father of the C programming language, and with fellow Bell Labs researcher Ken Thompson, he used C to build UNIX, the operating system that so much of the world is built on — including the Apple empire overseen by Steve Jobs.
“Pretty much everything on the web uses those two things: C and UNIX,” Pike tells Wired. “The browsers are written in C. The UNIX kernel — that pretty much the entire Internet runs on — is written in C. Web servers are written in C, and if they’re not, they’re written in Java or C++, which are C derivatives, or Python or Ruby, which are implemented in C. And all of the network hardware running these programs I can almost guarantee were written in C.
“It’s really hard to overstate how much of the modern information economy is built on the work Dennis did.”
Even Windows was once written in C, he adds, and UNIX underpins both Mac OS X, Apple’s desktop operating system, and iOS, which runs the iPhone and the iPad. “Jobs was the king of the visible, and Ritchie is the king of what is largely invisible,” says Martin Rinard, professor of electrical engineering and computer science at MIT and a member of the Computer Science and Artificial Intelligence Laboratory.
“Jobs’ genius is that he builds these products that people really like to use because he has taste and can build things that people really find compelling. Ritchie built things that technologists were able to use to build core infrastructure that people don’t necessarily see much anymore, but they use everyday.”
From B to C
Dennis Ritchie built C because he and Ken Thompson needed a better way to build UNIX. The original UNIX kernel was written in assembly language, but they soon decided they needed a “higher level” language, something that would give them more control over all the data that spanned the OS. Around 1970, they tried building a second version with Fortran, but this didn’t quite cut it, and Ritchie proposed a new language based on a Thompson creation known as B.
Depending on which legend you believe, B was named either for Thompson’s wife Bonnie or BCPL, a language developed at Cambridge in the mid-60s. Whatever the case, B begat C.
B was an interpreted language — meaning it was executed by an intermediate piece of software running atop a CPU — but C was a compiled language. It was translated into machine code, and then directly executed on the CPU. In those days, C was considered a high-level language. It would give Ritchie and Thompson the flexibility they needed, but at the same time, it would be fast.
That first version of the language wasn’t all that different from C as we know it today — though it was a tad simpler. It offered full data structures and “types” for defining variables, and this is what Ritchie and Thompson used to build their new UNIX kernel. “They built C to write a program,” says Pike, who would join Bell Labs 10 years later. “And the program they wanted to write was the UNIX kernel.”
Ritchie’s running joke was that C had “the power of assembly language and the convenience of … assembly language.” In other words, he acknowledged that C was a less-than-gorgeous creation that still ran very close to the hardware. Today, it’s considered a low-level language, not high. But Ritchie’s joke didn’t quite do justice to the new language. In offering true data structures, it operated at a level that was just high enough.
“When you’re writing a large program — and that’s what UNIX was — you have to manage the interactions between all sorts of different components: all the users, the file system, the disks, the program execution, and in order to manage that effectively, you need to have a good representation of the information you’re working with. That’s what we call data structures,” Pike says.
“To write a kernel without a data structure and have it be as consistent and graceful as UNIX would have been a much, much harder challenge. They needed a way to group all that data together, and they didn’t have that with Fortran.”
At the time, it was an unusual way to write an operating system, and this is what allowed Ritchie and Thompson to eventually imagine porting the OS to other platforms, which they did in the late 70s. “That opened the floodgates for UNIX running everywhere,” Pike says. “It was all made possible by C.”
Apple, Microsoft, and Beyond
At the same time, C forged its own way in the world, moving from Bell Labs to the world’s universities and to Microsoft, the breakout software company of the 1980s. “The development of the C programming language was a huge step forward and was the right middle ground … C struck exactly the right balance, to let you write at a high level and be much more productive, but when you needed to, you could control exactly what happened,” says Bill Dally, chief scientist of NVIDIA and Bell Professor of Engineering at Stanford. “[It] set the tone for the way that programming was done for several decades.”
As Pike points out, the data structures that Ritchie built into C eventually gave rise to the object-oriented paradigm used by modern languages such as C++ and Java.
The revolution began in 1973, when Ritchie published his research paper on the language, and five years later, he and colleague Brian Kernighan released the definitive C book: The C Programming Language. Kernighan had written the early tutorials for the language, and at some point, he “twisted Dennis’ arm” into writing a book with him.
Pike read the book while still an undergraduate at the University of Toronto, picking it up one afternoon while heading home for a sick day. “That reference manual is a model of clarity and readability compared to later manuals. It is justifiably a classic,” he says. “I read it while sick in bed, and it made me forget that I was sick.”
Like many university students, Pike had already started using the language. It had spread across college campuses because Bell Labs started giving away the UNIX source code. Among so many other things, the operating system gave rise to the modern open source movement. Pike isn’t overstating it when he says the influence of Ritchie’s work can’t be overstated, and though Ritchie received the Turing Award in 1983 and the National Medal of Technology in 1998, he still hasn’t gotten his due.
As Kernighan and Pike describe him, Ritchie was an unusually private person. “I worked across the hall from him for more than 20 years, and yet I feel I didn’t know him all that well,” Pike says. But this doesn’t quite explain his low profile. Steve Jobs was a private person, but his insistence on privacy only fueled the cult of personality that surrounded him.
Ritchie lived in a very different time and worked in a very different environment than someone like Jobs. It only makes sense that he wouldn’t get his due. But those who matter understand the mark he left. “There’s that line from Newton about standing on the shoulders of giants,” says Kernighan. “We’re all standing on Dennis’ shoulders.”
Additional reporting by Jon Stokes.
You’ve Got To Admit It’s Getting Better | TechCrunch
“I hate almost all software. It’s unnecessary and complicated at almost every layer … you don’t understand how fucked the whole thing is,” rants Ryan Dahl, the much- (and rightly-) lauded creator of Node.js. “It really, truly, is all crap. And it’s so much worse than anybody realizes,” agrees Zack Morris, who goes on to add, “The industry has backed itself into a corner and can’t even see that the way forward requires thinking outside the box.”
Investors and managers may not realize it, but the coders who do their work are in a collective state of angry ferment. Complaints about the state of modern software engineering multiply everywhere I look. Scrum, the state-of-the-art project-management methodology, is under attack: “I can only hope that when Scrum goes down it doesn’t take the whole Agile movement with it,” says Robert Martin, complaining about elitism and the rise of meaningless ‘Scrum Master’ certifications. Pawel Brodzinski disparages software certifications from a different angle: “It seems certification evaluates people independently and is objective. Unfortunately it’s also pretty much useless.”
Even test-driven development — the notion that a development team’s automated tests are even more important than the actual software they write, and should be written first — is being criticized. Once this belief seemed almost sacrosanct (although in my experience most of the industry paid it only lip service). Now, though, Pieter Hintjens argues, “The more you test software, the worse it will be.” Peter Sargeant agrees: “The whole concept of Test-Driven Development is hocus, and embracing it as your philosophy, criminal.”
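For readers who haven’t seen the practice being argued over: in test-driven development the test comes first and the implementation follows. A minimal sketch (the slugify example is invented for illustration):

```python
# Step 1 (written first): a test that states the behavior we want.
# At this point it fails, because slugify doesn't exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Test Driven  Development ") == "test-driven-development"

# Step 2 (written second): just enough code to make the test pass.
def slugify(title):
    return "-".join(title.lower().split())

# Red, then green: the cycle repeats for each new piece of behavior.
test_slugify()
```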
None of the above are wrong. Morris’s exegesis of the problematic process of iOS app development is spot on: beneath the slick exterior of Apple’s XCode environment and Objective-C language lie squirming Lovecraftian horrors from the 1980s like preprocessor macros, forests of cryptic compile/link flags and paths, scheme/project/target confusion, etc etc etc. Android development is better in some ways, but its recommended Eclipse environment is ugly, clunky and sometimes only barely comprehensible. Certifications seem to me (with some exceptions) mostly to be red flags that warn: “This person thinks that merely learning a new toolset is a significant feat that deserves recognition.” Test strategies need to be customized for the problem, not the other way around.
But I’m struck by how the anger and frustration cited above is so out-of-sync with my own experience. I’ve been writing code for money for twenty years, with a six-year interregnum from 2003 to 2009, because I got a book deal and spent that time writing novels full-time. When I got back into programming two years ago, I was struck by how much better things had gotten. Ham-handed languages like Perl and C++ have been largely replaced by elegant Ruby and Python, at least among startups. StackOverflow solves many problems before they even begin to grate. Instead of futzing around with server configurations and dealing with trainwrecks like J2EE, anyone can easily deploy and run code on the App Engine or Heroku clouds — for free!
Take Java. (Please.) People have been criticizing it since its birth; witness Jamie Zawinski’s fourteen-year-old takedown of the language. But also note that he praises it for being much better than its predecessors, and that Heroku this week announced support for its most likely successor, Scala. The rants above aren’t wrong; the state of the art isn’t great; but it’s important to recognize that it’s a lot better than it used to be. Some improvements, like test-driven development and agile methodologies, need further iteration. Others simply aren’t cost-effective to deploy right now.
Consider wind and solar power. They’re the future of energy generation, everyone knows that, but because we’ve already sunk trillions into fossil-fuel infrastructure, we can’t switch over to them immediately. Instead we’ll have to suffer through a bumpy, painful, decades-long transition — but at least we’re on a path to get there eventually. Similarly, functional programming, NoSQL databases, and other innovations may be the future of software, but it’s delusional to think that we can or should move to their wholesale adoption tomorrow. Today’s software is generally a mess, yes, but the important thing is that we’re moving in the right direction. Let’s remember that — and remember that until we get there, the best will remain the enemy of the good.
Scala use is less good than Java use for at least half of all Java projects - Good Stuff
So, I made a post agreeing with another post about Scala being “hard” for a large portion of Java developers. That post caused a fair amount of “discussion”, much of which misinterpreted it. Here I am making a post that I hope will be clearer about the who, the why, and my motivation.
First, about me. My name is David Pollak. I’ve been active in the Scala community since November 2006. I am the founder of the Lift web framework project. Here are the important bullet points:
- I have been writing Scala code continuously for longer than anyone outside of EPFL. [update - @propensive and Bill Venners have been in the Scala community longer than I have.]
- I started the first Scala conference in 2008, the Scala Lift Off, and continue to run that conference to this day.
- I have written more lines of Scala code (more than 250K) than almost anyone on the planet.
- I have written a popular book introducing Scala called Beginning Scala.
- I have had more than 10,000 interactions related to Scala and Lift on the Scala and Lift mailing lists over the last almost 5 years (that means I’ve interacted with a lot of developers.)
- I founded the Lift web framework project and have written a substantial portion of Lift including designing Lift’s core APIs. Lift is one of the most popular Scala-based frameworks and the first broadly known Scala library.
- I have code reviewed more than 500,000 lines of Scala code in the last year.
- I make my living consulting, mainly on Scala and Lift related projects and have broad exposure to lots of projects.
- I have given more than 25 presentations on Scala and Lift over the last 5 years.
- I have taught hundreds of people to use Scala and Lift in small groups.
- I introduced Scala, either directly or indirectly, to half of the big-name Scala users listed on TypeSafe’s home page.
All of the above is to say, “I have substantial experience coding Scala, teaching Scala, and introducing Scala into a wide variety of environments. I have interacted with enough Scala users and prospective Scala users to have a broad base of data to draw from in my analysis of Scala success factors.” This does not mean that I think I’m right and everyone else is wrong. It does, however, mean that posts using ad hominem rhetoric to argue against mine have very little validity or value. [Update: re-reading @fogus’s post… I misinterpreted it. I think @fogus and I reach the same conclusions. I stand by my assertions that I’m not perpetuating a meme. However, I apologize for harshing on @fogus and his post!]
With that as my setup, let me give you my conclusion: Scala is an inappropriate language for the majority of Java developers and cannot be expected to replace Java because for at least 50% of Java developers, Scala’s difficulty outweighs its value.
This should be no more controversial than: Java is an inappropriate language for the majority of PHP developers and cannot be expected to replace PHP because for at least 50% of PHP developers, Java’s difficulty outweighs its value.
While I used “Scala is hard for some developers” in my last blog post, I’m being a lot more precise here. I did not say that I find Scala difficult. I have however, observed a lot of developers who find Scala difficult and I’ve outlined my reasoning here and here. For that class of developer, the value of Scala is outweighed by the costs of Scala.
One of the things I’m particularly bad at is spelling. My brain just doesn’t remember the spelling of words. When I was the editor of my college newspaper, one of the reporters got up in my face about being lazy because I couldn’t spell (this was 1985… before spell checkers). I wrote an integral on the blackboard (this was before whiteboards) and asked her to solve it. She said, “That’s math… that’s hard… I’m talking about something as simple as spelling.” Well, for me, I can solve integrals in my sleep and I can’t spell to save my life. Different people have different skills.
Different people value different things. There is a class of people who love computers and love to code. I’m in that class. There is a class of people who would not put coding in their list of top 5 things they enjoy doing.
There is a large set of developers who have chosen development as their career who lack some combination of innate ability and motivation. There are schools that foster this mentality. No amount of blogging and blustering will change this.
For those who lack a combination of the innate ability to code and the interest in improving themselves, Scala is a liability. If there are too many Scala liabilities floating around (i.e., failed Scala projects), Scala will cease to grow and that’s a seriously suboptimal situation for people like me that have invested more than $600,000 in the Scala ecosystem. More on motivation at the end of the post.
One of the key arguments against my “Virginia” post was that we must take the Java developer pool as it is because Scala is not better enough to cause a material improvement in the overall quality of the pool. Quoting me: “I am explicitly rejecting the argument ‘well, then, find better developers.’ We could solve the ‘Scala is hard’ problem by working to improve the overall quality of developers (ones who can read a type signature, ones who can express their programs mathematically, etc.), but that misses the point. The point is that Scala is not better enough to force a revolution in training, education, and hiring such that Scala will be able to change the quality of the average developer enough to make Scala not hard for that developer.”
I agree with Paul Snively that Scala is learnable. That’s exactly why I’ve been actively promoting Scala for years. The issue is that Scala is learnable by a certain class of people. That class does not include those who don’t want to learn it and those who don’t have the ability to learn Scala (just as there are those who can code PHP but could not code Java).
In fact, I have been saying for nearly four years that there is a class of developers for whom Scala is not appropriate… perhaps longer, but that’s the oldest post I could find. My position at the time was not dissimilar from the position that many are taking today: train or fire the developers who are less productive with Scala. For at least three years, I have also been pretty clear that there’s a class of Java-only developers who do not succeed with Scala. So, my position has been consistent for many years.
What has changed is that I’ve realized there’s a vast quantity of development shops where the developers show up, have a few meetings, write a few lines of code, and go home. I’ve had experience with three instances of that kind of company over the last year. One adopted Scala and is struggling, but it is trying to do the right thing despite problems with Java written in Scala, recruiting developers, and internal resistance. One made the decision not to adopt Scala (although there’s a little Scala the VP Eng doesn’t know about in production, maintained by one person for whom Scala is vastly better at the task than Java). One made the decision to roll back from Scala to Java because the institutional cost of replacing half their development staff, sending 25% of the remaining staff to expensive courses, and foregoing outsourcing parts of the project outweighed the value that Scala was bringing to the top 3 developers (although it was a very tough and close call).
We live in a world where the average developer writes 3,250 lines of code per year (about 20 per day). This is firing up Eclipse, pressing the “give me pattern X” button, filling in the blanks, then going to a few meetings and calling it a day. We cannot fire all these developers. We cannot train these developers to be better. This is the Center of the Mean. This is the developer who may be the butt of a Dilbert cartoon. But you know what? That’s who uses Java. And you know what else? That developer lacks some combination of the innate ability and the will to get better. Not only that, but all the way up that developer’s management chain, there is neither the ability nor the will to change the situation. We cannot move this mountain… or more specifically, Scala is not better enough than anything else to allow shops to fire 50% of their less productive developers.
So, the better courses of action would be to:
- Focus on the kinds of developers and projects that will get a 3x or greater multiplier out of Scala; or
- Improve Scala for the COTM (Center of the Mean) developer (this is not going to happen as long as Scala is primarily a research-driven language).
So, “what,” you ask, “is your motivation in writing a series of pieces that make it seem like Scala is not successful and is hard?”
I have been a Scala supporter and fan since the time I encountered the language nearly five years ago. There are few people who have moved the cause of Scala along more than I have and I have a strong vested interest in seeing Scala and Lift continue to succeed.
Scala has seen some remarkable success stories. The quotes, articles, and general sense of awesomeness around Scala are both tremendous and well deserved. Scala is a remarkable language, one with no equal in computing today in terms of its versatility for solving a broad range of simple to complex problems.
But Scala is not a cure-all. Scala is successful in places that hire excellent developers. Scala gives those developers a higher multiplier on their ability than Java or most other languages. But in less-than-expert hands, or worse, in the hands of those who resent Scala, it’s worse than Java. It leads to team strife and discord that, given the team type, often manifests itself in passive-aggressive ways where ship dates slip and, ultimately, the language is blamed.
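To give that “multiplier” claim a concrete flavor (this is my illustration, not code from any project mentioned here; the Order class and sample data are invented), here is the kind of data-summarization task that is one declarative expression in Scala but pages of loops and accumulator maps in pre-lambda Java:

```scala
// Hypothetical domain type for the sketch.
case class Order(customer: String, amount: BigDecimal)

val orders = List(
  Order("alice", BigDecimal(120)),
  Order("bob", BigDecimal(75)),
  Order("alice", BigDecimal(30))
)

// Total spend per customer, largest first -- a single pipeline:
// group, sum each group, sort descending.
val totals: List[(String, BigDecimal)] =
  orders
    .groupBy(_.customer)
    .map { case (customer, os) => customer -> os.map(_.amount).sum }
    .toList
    .sortBy { case (_, total) => -total }

// totals == List(("alice", BigDecimal(150)), ("bob", BigDecimal(75)))
```

In expert hands this style is dense and correct; in resentful or untrained hands the same pipeline becomes an unreadable one-liner, which is exactly the double-edged sword described above.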
In order for Scala to continue to grow, there must continue to be Scala successes with a minimum of Scala failures. This means that being open and honest about Scala’s strengths and weaknesses is imperative so that the right places choose Scala and Lift.
Scala’s growth means that Scala must be adopted by the kinds of developers that have the right intersection of ability and desire to build amazing things with Scala.
We must accept that Scala’s not going to displace Java for green-screen, CRUD, DB-front-end applications. Scala’s value for doing ORM (sorry Max… Squeryl is really amazing) is way less than Scala’s value for doing real-time, distributed, concurrent applications. But most of the world is doing ORM, CRUD, green-screen, fill-in-the-form-and-update-the-database kind of stuff. That’s where most of the developers and money are. While the cool kids are building massively mumble, event-driven, mega-hyper-super-colossal-data, real-time, buzz-word-buzz-word stuff, most developers are doing the boring work of moving data in and out of a database, and Oracle/SQL offers a perfectly reasonable concurrency model.
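To make the concurrency side of that contrast concrete, here is a minimal sketch of the kind of composition Scala’s standard library makes cheap; fetchPrice and fetchInventory are hypothetical stand-ins for remote calls, not a real API:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Pretend remote services; in real code these would hit the network.
def fetchPrice(sku: String): Future[BigDecimal] = Future(BigDecimal("19.99"))
def fetchInventory(sku: String): Future[Int]    = Future(42)

// Start both calls before the for-comprehension so they run concurrently.
val priceF     = fetchPrice("sku-1")
val inventoryF = fetchInventory("sku-1")

// Compose the two asynchronous results with no threads or locks in sight.
val quote: Future[String] = for {
  price <- priceF
  stock <- inventoryF
} yield s"$stock units at $price"

// Blocking here only for the demo; real code would stay asynchronous.
println(Await.result(quote, 5.seconds)) // prints "42 units at 19.99"
```

There is no equivalent one-screen version of a CRUD form in Scala that beats the Java tooling by the same margin, which is the asymmetry the paragraph above is pointing at.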
As a community, we must accept Scala’s weaknesses. We must recruit developers who are going to take advantage of Scala’s strengths (which are numerous). We must also actively discourage folks from using Scala if there’s a reasonable likelihood of failure. Having a “Scala is hard” or “Scala is for really good developers” brand is a lot better than having a “Scala is risky and, as often as not, leads to failure” brand.
My motivation in writing these posts is to raise this kind of awareness in the brains of developers. I’d rather see 5,000 new Scala projects in the next year where 4,000 are successful than see 50,000 new projects where 10,000 are successful. The developers who have the ability and inclination to be successful with Scala will likely read my posts, experiment with Scala, succeed, and then claim I don’t know what I’m talking about. The rest will repeat the conclusion (“Scala is too hard”) and choose not to use it. That outcome will likely lead to a much higher Scala success rate, and until Scala changes to be nicer to COTM developers, that’s a much better path to success for Scala and Lift.
Oh… and all you wicked smart people who are pushing the boundaries (or think you will) with data size, event frequency, and real-time stuff: you’ll find Scala to be a dream come true, unlike anything you’ve ever used (okay, except maybe Haskell). So come, build your cool thing on Scala, and succeed.