Andrei Alexandrescu has written a nice article, "The Case for D" (click on 'Print' to read it on a single page):
D1 is a very nice language, and I use it often, but this article shows mostly the good sides of the D2 language and its compilers, focusing on what it may do in the future and ignoring its numerous current downsides and problems. Raising false expectations in potential new D users is dangerous. I think that giving a more balanced account of the current situation is better, even if in the future most of the current D problems may be fixed.
A good article must also show the current troubles of the language, and not just talk about good implementations that may appear years from now. Today Java is a very fast language, its compiler helps the programmer avoid many bug-prone situations, and its toolchain is very good. But at the beginning Java was really slow and of limited usefulness; it was little more than a toy.
This post isn't a list of all the faults I see in the D language; it's a list of comments about the article by Andrei Alexandrescu.
From the article:
>In the process, the language's complexity has increased, which is in fact a good indicator because no language in actual use has ever gotten smaller.<
The D2 language is more complex than D1, and even if each thing added to D may have its justifications, the C++ language clearly shows that too much complexity is bad. So higher complexity is not a good indicator.
>Other implementations are underway, notably including an a .NET port and one using the LLVM infrastructure as backend.<
The LDC compiler (with the LLVM backend) is already usable on Linux to compile D1 code with the Tango standard library (but it lacks the built-in profiler). On Windows LLVM lacks exception support, so it can't be used yet.
>D could be best described as a high-level systems programming language.<
It may be quite hard to think about using D to write something like the Linux kernel, or code for small embedded systems. Compiled D programs are too big for embedded systems with a few kilobytes of RAM, and the D language relies too much on the GC (even if it can be switched off, etc.) to be a good tool for writing a real-world kernel.
So D is currently more of a systems-programming-like language: a multi-level language that can be used to write code quite close to the 'metal' or to write high-level generic code too.
>It encompasses features that are normally found in higher-level and even scripting languages -- such as a rapid edit-run cycle,<
Being made of separately compiled modules, the edit-run cycle of a D program can be as fast as in other languages like C# and Java.
>In fact, D can link and call C functions directly with no intervening translation layer.<
On Windows you usually have to compile the C code with DMC (the Digital Mars C compiler) for this to work.
>However, you'd very rarely feel compelled to go that low because D's own facilities are often more powerful, safer, and just as efficient.<
In practice there are currently situations where using C-style code leads to higher performance in D1 (especially if you use the DMD compiler instead of LDC).
>support for documentation and unit testing is built-in.<
Such things are very handy and nice. But the current built-in support for documentation has many bugs, and the built-in unit testing is very primitive and limited: for example tests have no name, they just contain normal code and assert() calls, and the test run stops as soon as the first assert fails.
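As a minimal sketch of what the built-in support looks like (the add() function is a made-up example):

```d
import std.stdio;

int add(int a, int b) {
    return a + b;
}

// A built-in unit test: it has no name, it's just a block of normal
// code with assert() calls, and the whole test run stops at the
// first assert that fails.
unittest {
    assert(add(2, 3) == 5);
    assert(add(-1, 1) == 0);  // never reached if the assert above fails
}

void main() {
    writeln("compile with -unittest to run the tests before main()");
}
```

The tests run only when the program is compiled with the -unittest flag, just before main() starts.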
return printf("hello, world\n") < 0;
This may be more correct C (EXIT_SUCCESS and EXIT_FAILURE come from stdlib.h):
if (printf("hello, world\n") >= 0)
    return EXIT_SUCCESS;
return EXIT_FAILURE;
>(and T!(X) or simply T!X for T
In D1 the T!X syntax isn't supported. In D2 there's another rule that restricts where this shorthand can be used.
This is an example where things are more complex in D2 just to save two chars.
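To illustrate the two instantiation syntaxes (Box is a made-up template):

```d
// The parenthesized instantiation syntax works in both D1 and D2;
// the T!X shorthand is D2-only and saves just two characters.
struct Box(T) {
    T value;
}

void main() {
    Box!(int) a;  // full syntax, valid in D1 and D2
    Box!int b;    // shorthand, valid in D2 only
    a.value = 1;
    b.value = 2;
    assert(a.value + b.value == 3);
}
```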
>D's unit of compilation, protection, and modularity is the file. The unit of packaging is a directory.<
The D module system is nice and handy, but it currently has several bugs and some semantic holes.
The sensation it leaves in the programmer is that its design was started well, but then its development stopped mid-course, leaving some of its functionality half-finished.
For example, if you import the module 'foo', it brings into the current namespace not just the 'foo' name itself, but all the names contained in 'foo'. This is silly.
There are also troubles with circular-import semantics, package semantics, and safety (there is no syntax to ask for all the names of a module to be imported; importing everything is simply the default behaviour, and this is bad).
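A small D2 sketch of that default behaviour, using Phobos modules as examples:

```d
// By default an import brings in the module name AND all of its
// public names, unqualified:
import std.string;            // both std.string.strip() and plain strip() work

// A selective import is the way to limit the visible names:
import std.algorithm : sort;  // only 'sort' comes in from std.algorithm

void main() {
    assert(strip("  hi  ") == "hi");  // unqualified name, imported implicitly
    auto a = [3, 1, 2];
    sort(a);
    assert(a == [1, 2, 3]);
}
```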
Another downside is that no current D compiler is able to follow the module tree by itself to compile the code, so you need to tell the compiler all the modules to compile, even though that information is already fully present in the code itself. Several tools try to patch this basic hole (very big programs need more complex building strategies, but experience shows me that most small D programs are fine with that automatic compilation model).
>* One, the language's grammar allows separate and highly optimized lexing, parsing, and analysis steps.<
This also has the downside that it limits the syntax that can be used in the language; for example it makes this code impossible:
foreach (i, item in items)
forcing the language to use this instead, which is a bit less readable and a little more bug-prone:
foreach (i, item; items)
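In a complete program the actual syntax looks like this:

```d
import std.stdio;

void main() {
    auto items = ["a", "b", "c"];
    // The semicolon separates the index/item variables from the
    // aggregate; an 'in' keyword would read better here, but the
    // compiler's strictly layered design rules it out.
    foreach (i, item; items)
        writefln("%s: %s", i, item);
}
```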
>* Three, Walter Bright, the creator and original implementor of D, is an inveterate expert in optimization. <
This is probably true; despite that, the DMD backend produces not very efficient code. LDC (with the LLVM backend) is generally much better at this.
Update1, Jun 17 2009: DMD (especially DMD D1) is faster than LDC in compiling code.
>Other procedural and object-oriented languages made only little improvements,<
Untrue, see Clojure and Scala. Hopefully D will do as well or better.
Update1, Jun 17 2009: both Clojure and Scala run on the JVM, so the situation is different.
>a state of affairs that marked a recrudescence of functional languages<
Some other people may talk about a renaissance instead :-)
>SafeD is focussed only on eliminating memory corruption possibilities.<
It may be better to add other safety guarantees to such SafeD modules too.
>That makes Java and C# code remarkably easy to port into a working D implementation.<
It's indeed quite easy to port C/Java code to D. But translating C headers to D may require some work. And currently the D garbage collector is much less efficient than the common Java ones, so D requires code that allocates less often.
Update1, Jun 17 2009: there are tools that help convert C headers to D.
>such as an explicit override keyword to avoid accidental overriding,<
>and a technique I can't mention because it's trademarked, so let's call it contract programming.<
It's built-in in the language. It's not implemented in a very complete way, but it may be enough if you aren't used to Eiffel.
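Contract programming is built into the language with in/out blocks on functions; a minimal sketch (sqrtFloor is a made-up example):

```d
// 'in' states the precondition, 'out' the postcondition; both are
// checked at run time in non-release builds.
int sqrtFloor(int x)
in {
    assert(x >= 0, "negative input");
}
out (result) {
    assert(result * result <= x);
}
body {
    int r = 0;
    while ((r + 1) * (r + 1) <= x)
        r++;
    return r;
}

void main() {
    assert(sqrtFloor(10) == 3);
    assert(sqrtFloor(16) == 4);
}
```

Compiling with -release strips the contract checks away, so they cost nothing in the final binary.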
>The implementation now takes O(n) time, and tail call optimization (which D implements) takes care of the space complexity.<
At the moment only the LDC compiler (a D1 compiler) is able to perform tail-call elimination (and probably only in simple situations; but as LLVM improves, LDC will probably improve too).
Update1, Jun 17 2009: I was wrong, DMD is able to tail-call optimize if the situation is simple.
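For instance, a tail-recursive accumulator version of factorial (my own example) is the kind of simple case a compiler can turn into a loop:

```d
// The recursive call is the very last operation, so tail-call
// elimination can replace it with a jump, giving O(1) stack space.
int factorial(int n, int acc = 1) {
    if (n <= 1)
        return acc;
    return factorial(n - 1, n * acc);  // tail call
}

void main() {
    assert(factorial(5) == 120);
}
```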
>iron-clad functional purity guarantees, and comfortable implementation when iteration is the preferred method. If that's not cool, I don't know what is.<
At the moment calls to pure functions aren't moved out of loops. There can be problems if the pure function raises an out-of-memory exception, or if a change in the floating-point rounding mode is involved.
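A sketch of the kind of hoisting that is not yet performed (triple is a made-up pure function):

```d
// A D2 'pure' function: the result depends only on the arguments,
// so a call with loop-invariant arguments could legally be moved
// out of the loop -- but current compilers don't do it.
pure int triple(int x) {
    return 3 * x;
}

void main() {
    int total = 0;
    foreach (i; 0 .. 1000) {
        total += triple(7);  // still evaluated 1000 times today
    }
    assert(total == 21000);
}
```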
Functional programming juggles lots of immutable data, and this puts the garbage collector under high pressure. Currently the D GC isn't efficient enough for such quick cycles of memory allocation, so it's not yet a good fit for functional-style programming (or for Java-style object-oriented programming that allocates very frequently).
All this isn't meant to discourage you from using the D1/D2 languages.
Update1, Jun 17 2009:
See also the discussion on Reddit:
Answers to the received comments:
Thank you, Anonymous, for your large number of comments; they will help me a lot in improving this blog post. I'll fix it where I see it's necessary.
>For exception support, it's more C++'s LLVM and Windows SEH issue, to get it right.<
Eventually LLVM/Clang developers will support exceptions on Windows. Several things tell me that LDC will be a good compiler.
>As for profiler, I believe you can compile to LLVM bytecode and profile that by LLVM tools, but well, it's ugly.<
Some things are already possible (I am trying KCachegrind now), but DMD is much handier: you can just add "-profile" and it just works. (DMD's code coverage is handy too, but it doesn't work on some of my bigger programs.) Walter has said more than once that easy-to-use tools help people use them more often.
>but what we actually want are just more tools and more mature tools.<
Command-line features like DMD profiler are enough for me in many situations.
>Well, there is actually microkernel OS in D around:<
I know, but I have read a half-serious proposal to create another compiler to compile the Linux kernel, because GCC isn't well suited for that purpose. So I guess D compilers may be even less suited for it.
On the other hand Microsoft is trying to use a modified C# to write an OS (and they say the extra safety offered by C# allows some runtime checks in the code to be avoided, which ends up producing code that is globally efficient enough), so it may be doable in D too.
>D programs are somewhat bigger minimal C apps (and esp., compiled by LLVM LDC) because of 3 things:<
A GC can't be avoided, but maybe it's possible to keep it outside the binary, dynamically linked.
The runtime contains Unicode management, associative arrays, dynamic arrays and more, but it may be possible to strip away some of those things when they aren't used.
>(as a example of such multi-level language, but I'd like to see OMeta-like stuff for D better).<
OMeta is the future :-)
See also PyMeta, an OMeta for Python:
>Exactly, but you always can reimpement your wheels (read: modules/packages via classes, and some design pattern around that), and feed them thru CTFE/mixins.<
I'd like the built-in unittest system to be a bit more powerful. You can of course re-implement it outside the language, but then it's better to remove the built-in unittest features; keeping both is not good.
>That's actually matter not compiler itself, but your build system.<
The DMD compiler already has built-in features that are beyond the purposes of a normal compiler. Adding this automatic build feature isn't essential, but it's handy and positive.
>Hey, that 'item in items' stuff is not D semantic, and has nothing to with compiler itself.<
The D compiler is designed in several separate layers. So it seems that to change the syntax, adding an "in" inside the foreach, you'd have to add some feedback between the layers, and this is seen as bad for the compiler (and probably Walter is right here).
Imports are already private by default now in D. The remaining problems here are quite a bit bigger than that.
>new instaneous dee0xd<
Never seen that before.
>Arguable: dmd still compiles faster, and binary sizes are smaller. LLVM optimizations are much more promising, though.<
In most of my benchmarks LDC produces programs that are faster or much faster. DMD indeed compiles faster (the D2 DMD is a bit slower). The binaries produced by LDC are sometimes bigger, but they are working on this, and most times the size is similar.
>Somewhat different playgrounds here: JVM-based or self-hosted.<
You are right, the situation is different. But I think you can implement Clojure multiprocessing ideas even without a VM.
>Just stub your own GC in. There are different GC strategies after all, why to hope 'one size fitts all' on every cases?<
Indeed, JavaVM ships with more than one GC to fulfill different purposes.
A GC written by me is probably going to be worse than the current built-in one; I am not able to write a GC as good as the JavaVM ones. So what you suggest here is not a good solution.
>Java GC's was much worse than Oberon's btw, when it just appeared.<
Java at the beginning was WAY worse, I know, I have stated this at the beginning of my blog post.
>And if you have many of 'quick cycles of memory allocation', something is wrong with your memory allocator. It's not better when you have lotso manual malloc/free, its better when you have memory pools, arenas, zones, and right allocation (or GC) strategy, which fits better for you app.<
If you look at most Java programs you can often see many small objects allocated in loops.
In the same way, in functional-style languages/programs you see lots of immutable data structures being created and collected away all the time. From my benchmarks I think the current D GC isn't fit for such kinds of code.
>So I believe we can't rely on one single GC for all use cases, but we need lotso strategies and pluggable GC's for different uses cases and different strategies.<
I agree, but probably 2-3 GCs (built-in and switchable at compile time) can be enough for most D purposes. I am sure there are many ways to improve the current D GC (for example a type system able to tell GC-managed pointers apart, and a hybrid moving/conservative GC that pins down manually managed memory and moves and compacts all the rest); my purpose was just to show and discuss the current situation.
>That shouldn't stop you in any way from using D<
Of course. I don't waste hours of my time commenting about a language I don't like to program with :-)
D is my second favourite language (after Python); I like it and I have written a lot of D code :-)
Thank you again for all your comments, as you see I agree with most of the things you have written here.