Subject:The Case for D - the other side of the coin
Time:12:27 am
Andrei Alexandrescu has written a nice article, "The Case for D" (click on 'Print' to read it on a single page):
http://www.ddj.com/hpc-high-performance-computing/217801225

D1 is a very nice language, and I use it often, but this article shows mostly the good sides of the D2 language and its compilers, focusing on what they may do in the future and ignoring their numerous current downsides and problems. Giving false expectations to possible new D users is dangerous. I think a more balanced account of the current situation is better, even if in the future most of the current problems may be fixed.

A good article must show the current troubles of the language too, and not just talk about good implementations that may appear years from now. Today Java is a very fast language, its compiler helps the programmer avoid many bug-prone situations, and its toolchain is very good; but at the beginning Java was really slow and of limited usefulness, little more than a toy.

This post isn't a list of all the faults I see in the D language; it's a list of comments about Andrei Alexandrescu's article.

From the article:

>In the process, the language's complexity has increased, which is in fact a good indicator because no language in actual use has ever gotten smaller.<

The D2 language is more complex than D1, and even if each thing added to D may have its justification, the C++ language clearly shows that too much complexity is bad. So higher complexity is not a good indicator.


>Other implementations are underway, notably including a .NET port and one using the LLVM infrastructure as backend.<

The LDC compiler (with the LLVM backend) is already usable on Linux to compile D1 code with the Tango standard library (but it lacks the built-in profiler). On Windows LLVM lacks exception support, so LDC can't be used there yet.


>D could be best described as a high-level systems programming language.<

It may be quite hard to imagine using D to write something like the Linux kernel, or code for small embedded systems. Compiled D programs are too big for embedded systems with a few kilobytes of RAM, and the D language relies too much on the GC (even if it can be switched off, etc.) to be a good tool to write a real-world kernel.

So D is currently more of a systems-programming-flavoured language: a multi-level language that can be used to write code quite close to the 'metal', or high-level generic code too.


>It encompasses features that are normally found in higher-level and even scripting languages -- such as a rapid edit-run cycle,<

Since D is compiled module by module, the edit-run cycle of a D program can be as fast as in languages like C# and Java.


>In fact, D can link and call C functions directly with no intervening translation layer.<

On Windows you usually have to compile the C code with DMC to do this (DMD produces OMF object files, while most other Windows C compilers emit COFF).
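For example, calling a C function from D takes only an extern(C) declaration (a minimal sketch; 'twice' is an invented example function):

// twice.c -- plain C, compiled separately:
//     int twice(int x) { return 2 * x; }

// main.d -- D side; no wrapper or translation layer is needed:
extern (C) int twice(int x);

void main() {
    int y = twice(21);
    assert(y == 42);
}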


>However, you'd very rarely feel compelled to go that low because D's own facilities are often more powerful, safer, and just as efficient.<

In practice there are currently situations where using C-style code leads to higher performance in D1 (especially if you use the DMD compiler instead of LDC).


>support for documentation and unit testing is built-in.<

Such things are very handy and nice. But the current built-in support for documentation has many bugs, and the built-in unit testing is very primitive and limited: for example, tests have no names, they just contain normal code and assert() calls, and the test run stops as soon as the first assert fails.
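For reference, this is all a built-in test looks like (a minimal sketch; 'triple' is an invented example). Compiled with -unittest, the tests run once before main():

int triple(int x) {
    return x * 3;
}

unittest {                   // no name, no runner, no report
    assert(triple(1) == 3);
    assert(triple(0) == 0);  // never reached if the first assert fails
}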


return printf("hello, world\n") < 0;

This may be more correct C (EXIT_SUCCESS and EXIT_FAILURE come from <stdlib.h>):

if (printf("hello, world\n") >= 0)
    return EXIT_SUCCESS;
else
    return EXIT_FAILURE;


>(and T!(X) or simply T!X for T)<

In D1 the T!X syntax isn't supported. And in D2 there's another rule: you can't write
T!(U!(X))
as
T!U!X
This is an example of things getting more complex in D2 just to save two characters.
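A concrete sketch of the rule (D2; Box and Pair are invented example templates):

struct Box(T) {}
struct Pair(T) {}

Pair!(Box!(int)) a;   // fine in both D1 and D2
// Pair!Box!int b;    // rejected: chained ! needs parentheses
Pair!int c;           // the D2-only shorthand for Pair!(int)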


>D's unit of compilation, protection, and modularity is the file. The unit of packaging is a directory.<

The D module system is nice and handy, but it currently has several bugs and some semantic holes.

The feeling it leaves in the programmer is that its design started out well but then stopped mid-course, leaving some of its functionality half-finished.

For example, if you import the module 'foo', it brings into the current namespace not just the 'foo' name itself but also all the names contained in 'foo'. This is silly.

There are also troubles with circular import semantics, package semantics, and safety (there is no dedicated syntax for importing all the names of a module: that's simply the default behaviour, and this is bad).
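The finer-grained import forms do exist, but they are opt-in instead of being the default (a sketch, reusing the hypothetical module 'foo' with invented names 'bar' and 'spam'):

import foo;               // default: 'foo' AND all its names land in scope
static import foo;        // only the module name; you must write foo.bar
import foo : bar, spam;   // selective: only the listed names
import io = std.stdio;    // renamed: use io.writefln(...)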

Another downside is that none of the current D compilers is able to follow the module tree by itself to compile the code, so you need to tell the compiler all the modules to compile, even though that information is already fully present in the code itself. Several tools try to patch this basic functionality hole (very big programs need more complex build strategies, but experience shows me that most small D programs would be fine with that automatic compilation model).
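A trivial two-module program shows the hole ('util' is an invented module; bud is one of those patch-up tools):

// util.d
module util;
import std.stdio;
void greet() { writefln("hello"); }

// main.d
module main;
import util;   // the dependency is written right here in the source
void main() { greet(); }

// dmd main.d          -> link error: util must be listed by hand
// dmd main.d util.d   -> OK
// bud main.d          -> an external tool walks the imports for you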


>* One, the language's grammar allows separate and highly optimized lexing, parsing, and analysis steps.<

This also has the downside of limiting the syntax that can be used in the language; for example, it makes this code impossible:
foreach (i, item in items)
forcing the language to use this instead, which is a bit less readable and a little more bug-prone:
foreach (i, item; items)


>* Three, Walter Bright, the creator and original implementor of D, is an inveterate expert in optimization. <

This is probably true; despite that, the DMD backend produces code that is not very efficient. LDC (with its LLVM backend) is generally much better at this.
Update1, Jun 17 2009: DMD (especially the D1 DMD) is faster than LDC at compiling code.


>Other procedural and object-oriented languages made only little improvements,<

Untrue, see Clojure and Scala. Hopefully D will do as well or better.
Update1, Jun 17 2009: both Clojure and Scala run on the JVM, so the situation is different.


>a state of affairs that marked a recrudescence of functional languages<

Other people might talk about a renaissance instead :-)


>SafeD is focussed only on eliminating memory corruption possibilities.<

It may be better to add other kinds of safety to such SafeD modules too.


>That makes Java and C# code remarkably easy to port into a working D implementation.<

It's indeed quite easy to port C/Java code to D. But translating C headers to D may require some work. And currently the D garbage collector is much less efficient than the common Java ones, so D requires code that allocates less often.
Update1, Jun 17 2009: there are tools that help convert C headers to D.


>such as an explicit override keyword to avoid accidental overriding,<

It's optional.
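A sketch of why 'optional' matters ('Widget' and the methods are invented examples):

class Base {
    void draw() {}
}

class Widget : Base {
    override void draw() {}  // checked: error if Base has no draw()
    void drow() {}           // typo: silently declares a NEW method
                             // instead of overriding anything
}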


>and a technique I can't mention because it's trademarked, so let's call it contract programming.<

It's built into the language. It's not implemented in a very complete way, but it may be enough if you aren't used to Eiffel.
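For reference, the built-in contracts look like this (a minimal sketch; the contracts run in debug builds and vanish with -release):

// integer square root with a precondition and a postcondition
int isqrt(int x)
in {
    assert(x >= 0);
}
out (result) {
    assert(result * result <= x);
}
body {
    int r = 0;
    while ((r + 1) * (r + 1) <= x)
        r++;
    return r;
}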


>The implementation now takes O(n) time, and tail call optimization (which D implements) takes care of the space complexity.<

At the moment only the LDC compiler (a D1 compiler) is able to perform tail-call elimination (and probably only in simple situations; but as LLVM improves, LDC will probably improve too).
Update1, Jun 17 2009: I was wrong, DMD too is able to optimize tail calls when the situation is simple.
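By 'simple situations' I mean self-recursion in tail position, like this sketch, which a compiler can rewrite into a loop:

// sums 1..n; the recursive call is the last thing the function does,
// so its stack frame can be reused (a jump instead of a call)
int sumTo(int n, int acc = 0) {
    if (n == 0)
        return acc;
    return sumTo(n - 1, acc + n);
}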


>iron-clad functional purity guarantees, and comfortable implementation when iteration is the preferred method. If that's not cool, I don't know what is.<

At the moment calls to pure functions aren't moved out of loops. There can also be problems if the pure function raises an out-of-memory exception, or if a change in the floating-point rounding mode is involved.
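A sketch of the kind of code motion I mean (D2; 'slow' is an invented example function):

pure int slow(int x) {
    return x * x + 1;   // no side effects, same input -> same output
}

void update(int[] data, int k) {
    foreach (ref d; data)
        d += slow(k);   // slow(k) is loop-invariant: purity would allow
                        // hoisting it before the loop, but current
                        // compilers don't perform this motion yet
}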

Functional programming juggles a lot of immutable data, and this puts the garbage collector under high pressure. Currently the D GC isn't efficient enough for such quick cycles of memory allocation, so it's not yet a good fit for functional-style programming (or for a Java-style object-oriented programming style that allocates very frequently).

All this isn't meant to discourage you from using the D1/D2 languages.

-------------------------------

Update1, Jun 17 2009:
See also the discussion on Reddit:
http://www.reddit.com/r/programming/comments/8t7s1/the_case_for_d_the_other_side_of_the_coin/

Answers to the comments received:

Thank you, Anonymous, for your large number of comments. I'll fix the blog post where necessary; your comments will help me a lot in improving it.


>For exception support, it's more C++'s LLVM and Windows SEH issue, to get it right.<

Eventually LLVM/Clang developers will support exceptions on Windows. Several things tell me that LDC will be a good compiler.


>As for profiler, I believe you can compile to LLVM bytecode and profile that by LLVM tools, but well, it's ugly.<

Some things are already possible (I am trying KCachegrind now), but DMD is quite a bit handier: you can just add "-profile" and it just works. (DMD's code coverage is handy too, but it doesn't work on some of my bigger programs.) Walter has said more than once that having easy-to-use tools helps people use them more often.


>but what we actually want are just more tools and more mature tools.<

Command-line features like the DMD profiler are enough for me in many situations.


>Well, there is actually microkernel OS in D around:<

I know, but I have read a half-serious proposal to create another compiler just to compile the Linux kernel, because GCC isn't a great fit for that purpose. So I guess D compilers may be even less fit for it.
On the other hand, Microsoft is trying to use a modified C# to write an OS (and they say the extra safety offered by C# allows removing some runtime checks from the code, which ends up producing code that is efficient enough overall), so it may be doable in D too.


>D programs are somewhat bigger minimal C apps (and esp., compiled by LLVM LDC) because of 3 things:<

A GC can't be avoided, but maybe it's possible to keep it outside the binary, dynamically linked.
The runtime contains Unicode management, associative arrays, dynamic arrays and more, but it may be possible to strip some of those things away when they aren't used.
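Part of this is already possible today; a minimal sketch using the Phobos D1 API (Tango's GC module differs):

import std.gc;

void main() {
    std.gc.disable();      // no automatic collections from here on
    // ... allocation-heavy, pause-sensitive work ...
    std.gc.enable();
    std.gc.fullCollect();  // collect manually at a point we choose
}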


>(as a example of such multi-level language, but I'd like to see OMeta-like stuff for D better).<

OMeta is the future :-)
See also PyMeta, an OMeta for Python:
http://washort.twistedmatrix.com/


>Exactly, but you always can reimpement your wheels (read: modules/packages via classes, and some design pattern around that), and feed them thru CTFE/mixins.<


I'd like the built-in unittest system to be a bit more powerful. Of course you can re-implement testing outside the language, but then it's better to remove the built-in unittest feature: keeping both is not good.


>That's actually matter not compiler itself, but your build system.<

The DMD compiler already has built-in things that are beyond the purposes of a normal compiler. Adding this automatic build feature isn't essential but it's handy and positive.


>Hey, that 'item in items' stuff is not D semantic, and has nothing to with compiler itself.<

The D compiler is designed as several separate layers. So it seems that changing the syntax to accept an "in" inside foreach would require adding some feedback between the layers, and this is seen as bad for the compiler (and Walter is probably right here).


>public/private import?<

Imports are already private by default in D now. The problems here are quite a bit bigger than that.
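For reference, a sketch of the current behaviour ('mylib', 'helpers' and 'corelib' are invented module names):

module mylib;

import helpers;         // private by default: code that imports mylib
                        // doesn't see the names from helpers
public import corelib;  // re-exported: importing mylib also brings
                        // in corelib's names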


>new instaneous dee0xd<

Never seen that before.


>Arguable: dmd still compiles faster, and binary sizes are smaller. LLVM optimizations are much more promising, though.<

In most of my benchmarks LDC produces programs that are faster or much faster. DMD indeed compiles faster (the D2 version of DMD is a bit slower). Binaries produced by LDC are sometimes bigger, but the developers are working on this, and most of the time the sizes are similar.


>Somewhat different playgrounds here: JVM-based or self-hosted.<

You are right, the situation is different. But I think Clojure's multiprocessing ideas can be implemented even without a VM.


>Just stub your own GC in. There are different GC strategies after all, why to hope 'one size fitts all' on every cases?<

Indeed, the Java VM ships with more than one GC to fulfil different purposes.
But my own GC is probably going to be worse than the current built-in one: I am not able to write a GC as good as the Java VM ones. So this suggestion doesn't really solve the problem.


>Java GC's was much worse than Oberon's btw, when it just appeared.<

Java at the beginning was WAY worse, I know; I stated this at the beginning of my blog post.


>And if you have many of 'quick cycles of memory allocation', something is wrong with your memory allocator. It's not better when you have lotso manual malloc/free, its better when you have memory pools, arenas, zones, and right allocation (or GC) strategy, which fits better for you app.<

If you look at most Java programs you often see many small objects allocated in loops.
In the same way, in functional-style languages/programs you see lots of immutable data structures being created and collected all the time. From my benchmarks I think the current D GC isn't fit for those kinds of code.


>So I believe we can't rely on one single GC for all use cases, but we need lotso strategies and pluggable GC's for different uses cases and different strategies.<

I agree, but probably 2-3 GCs (built-in and switchable at compile time) would be enough for most D purposes. I am sure there are many ways to improve the current D GC (for example a type system able to tell apart GC-managed pointers, and a hybrid moving/conservative GC that pins down manually managed memory and moves and compacts all the rest); my purpose was just to show and discuss the current situation.


>That shouldn't stop you in any way from using D<

Of course. I don't waste hours of my time commenting on a language I don't like programming in :-)
D is my second favourite language (after Python); I like it and I have written a lot of D code :-)

Thank you again for all your comments; as you can see, I agree with most of the things you have written here.
Comments:

(Anonymous)
Time:2009-06-17 12:04 pm (UTC)
Hello, nice to read your notes. Just my $0.02 (part1):

>>Other implementations are underway, notably including an a .NET port and one using the LLVM infrastructure as backend.<

>The LDC compiler (with LLVM back-end) is already usable on Linux to compile D1 code with the Tango standard lib (but it lacks the built-in profiler). On windows LLVM lacks exception support, so it can't be used yet.

And there LDC for D2 is not quite usable yet, sigh :( but it's on the roadmap.
Tango+D2 doesn't compile yet, but some work is underway.

For exception support, it's more an issue of C++'s LLVM and Windows SEH, to get it right. As for the profiler, I believe you can compile to LLVM bytecode and profile that with LLVM tools, but well, it's ugly. We need better support in Tango for stack backtraces on exceptions (like h3r3tic's backtrace hack on Windows), better integration of exceptions and signals like SEH on Windows, better overall debug/profile tools (there are some, like h3r3tic's live profiler, or LLVM's native tools, or DMD's built-in one -- it's just that the UI happens to be ugly (any volunteers to port the Java-like Eclipse stuff to Descent, anyone?))

The same applies to the unittest stuff -- there's the 'unittest' keyword built into the compiler, there's a JUnit-like DUnit project, there are some in-house testing frameworks, but hell, no usable UI for them in Descent like the JUnit support in Eclipse yet.

It's just a matter of time/enough eager persons for those tools to appear -- it could be done, just like h3r3tic's stuff (DDL, oprofiler, dynamic linker, testing framework, etc.) -- as libraries, but runtime support seems poor yet. So you could always reinvent your wheels, about the same way as exception support in C (like GTK's OO-in-C stuff -- you can always emulate OO in plain C, or exceptions/unittests/reflection via classes, but hey, it's a kludge), your favorite testing framework, your favorite profiling tools and so on, but what we actually want is just more tools and more mature tools.

LLVM-based LDC makes some stuff like the DDL/xxl linkers quite obsolete, btw. But those tools could be done in plain D too, as we can see from actually working projects.

Also, there are some other D compilers, like dil (written in D itself), which is currently usable mostly for doxygen-like documentation generation, or source-to-source compilation tools (like the one used to port DWT to D from Java's SWT, or languagemachine's bindings for GCC trees, which could be used to transform gcc- (read: gdc-) generated intermediate representation, or PyD / Deelight Python integration, or the scripting languages MiniD/Monster, which resemble D itself), and so on -- so you're definitely in a better situation than with the C++ toolchain, but with maybe slightly worse (I believe about the same) features as the Java toolchain.


>>D could be best described as a high-level systems programming language.<

>It may be quite hard to think about using D to write something like the Linux kernel, or to write code for little embedded systems. D compiled programs are too much big for embedded systems with few kilobytes of RAM, an the D language relies too much on the GC (even if it can be switched off, etc) to be a good tool to write real-world kernel.

Well, there is actually a microkernel OS in D around: http://wiki.xomb.org/index.php?title=XOmB_Bare_Bones , and D definitely could be used for embedded systems (gdc can be cross-compiled for ARM and WinCE stuff, and I've seen some game/app toolkit for WinCE cross-compiled with Cygwin from gdc)

D programs are somewhat bigger than minimal C apps (especially when compiled by LLVM LDC) because of 3 things:
- static linking (try gcc -static with libc/libstdc++, not the funky dietlibc stuff, and then tell me about big binaries)
- minimal runtime size (Phobos is somewhat big, and Tango is definitely too much) -- but that is a libc vs. dietlibc issue;
you can create your map file, watch your runtime and invent a diet-Tango-alike
- the runtime's built-in GC (but you can link a GC stub in, or do manual MM just like in C, and the GC won't be called as long as you stay within a minimal D subset (about the same thing as C++/Embedded C++ (look at IOKit in Mac OS X)))

(Anonymous)
Time:2009-06-17 12:07 pm (UTC)
(part2)

>A multi-level language that can be used to write code quite close to the 'metal' or to write high-level generic code too.

And there is a brilliant metaprogramming example of this -- h3r3tic's OpenGL example (writing pixel shaders with metaprogramming).
Or read about the MiniD/Deelight/etc. integration with the D runtime (as an example of such a multi-level language, though I'd rather see OMeta-like stuff for D).

> >support for documentation and unit testing is built-in.<

> But the current built-in support for documentation has many bugs, and the built-in unit testing is very primitive and limited

> There are also troubles with circular import semantics, package semantics, safety (it lacks a syntax to import all names from a module. That's the default behavior, and this is bad).

Exactly, but you can always reimplement your wheels (read: modules/packages via classes, and some design pattern around that), and feed them through CTFE/mixins.

> Another downside is that all current D compilers aren't able to follow the module tree by themselves to compile code, so you need to tell the compiler all the modules you need to compile, even if such information is already fully present in the code itself

That's actually a matter not of the compiler itself, but of your build system. You can gcc *.c without the headers and still fail. You could have those dependency issues on .h files in your makefile, and have to put gcc -M .deps in there. But you don't have to do that in a smarter build tool, like scons, waf, rebuild/bud, dsss and so on. That's not a gcc vs. dmd issue, it's more a make vs. dsss/scons/waf issue.

>This also has the downside that it limits the possible syntax that can be used in the language, for example it makes this code impossible:

Hey, that 'item in items' stuff is not D semantics, and has nothing to do with the compiler itself. You could invent templates to do that via mixins, you could preprocess your D++ language to D (or to LLVM bytecode, or gdc's GCC GIMPLE, or LLVM/dil's C output) via some ANTLR-like tools (or languagemachine, and so on) -- but what fits that 'extensible syntax' stuff perfectly is more like the OMeta parser-and-tools stuff, and that could be done in D too :) Like REBOL's switchable syntax. You just have to translate your 'D++'-like language extension into something correctly parseable by the backend, or into D source itself, or into a C source/bytecode AST.

Exactly, but you can always reinvent your wheels. There are external tools (CandyDoc), libraries (DUnit, DDL, and some for reflection support), even compilers (like dil, used for generating the Tango documentation). Hell, you could do some funky stuff with source-to-source transformation, or platform-specific stuff like annotations for D.NET.


>>D's unit of compilation, protection, and modularity is the file. The unit of packaging is a directory.<

>For example if you import the module 'foo', in the current namespace it imports not just 'foo', but all the names contained into 'foo', and the 'foo' name itself. This is silly.

public/private import?

You could do mixins; you could compile your D code on the fly and link/load it at runtime with stuff like DDL or with LLVM.



>Being made of compiled modules, the edit-run cycle in a D program can be as fast as in other languages like C# and Java.

That stuff is actually much more of a productivity boost than C++/Boost/crappy makefiles :) A rapid REPL is what we missed in C++ all that time :)
And there is a variety of make-like toolchain tools: plain make, scons, waf, the D-native dsss (like maven), rebuild/bud, the new instantaneous dee0xd http://code.google.com/p/dee0xd/, and dmd itself is quite fast.

>On Windows you have to compile the C code with DMC to do this.

Just use the same toolchain for D/C++ code: gcc/g++/gdc (the svn version is somewhat better), llvm-gcc/LDC, dmd/dmc, and so on. There are some COFF/OMF issues with libraries too, so it's better to use the same toolchain throughout.

(Anonymous)
Time:2009-06-29 11:50 pm (UTC)
>> There are also troubles with circular import semantics, package semantics, safety (it lacks a syntax to import all names from a module. That's the default behavior, and this is bad).

>Exactly, but you always can reimpement your wheels (read: modules/packages via classes, and some design pattern around that), and feed them thru CTFE/mixins.

But the idea behind having a good compiler/api/framework is not (!!!) to reinvent the wheel!!

Sure, you can do a lot of hacks to get something going but Leonardo is just right here.

The module system is just fucking broken! There's no point in having a module system which does not really work and then piling up hacks so that it somehow works.

This has really been a big flaw since ancient times and should be fixed with high priority. It's one of the little reasons why D has not conquered the world yet. And until these things are fixed, it never will ...


>> Another downside is that all current D compilers aren't able to follow the module tree by themselves to compile code, so you need to tell the compiler all the modules you need to compile, even if such information is already fully present in the code itself

>That's actually matter not compiler itself, but your build system. You can gcc *.c w/o -I*.h, and still fail. You could have that dependency issues in your makefile on H files, and have to put gcc -M .deps in there. But you don't have to do in in smarter build tool, like scons, waf, rebuild/bud, dsss, so on. That's not gcc vs. dmd issue, that's more make vs. dss/scons/waf issue.


But I think Leonardo is right here again. Sure, for bigger programs you will need some kind of build system, but a lot of other languages out there show that for simple programs (even ones which use external libraries) the compiler is smart enough to figure out which modules are needed and does everything by itself. That again could be done by the D compiler as well. There is just no reason why not, and it would make things easier.

> That stuff is actually much more productivity boost that C++/Boost/crappy makefiles :) Rapid REPL is what we missed in C++ all that time :)
> And there are the variety of make-like toolchain tools: plain make, scons, waf, D-native dsss (like maven), rebuild/bud, new instaneous dee0xd http://code.google.com/p/dee0xd/, and dmd itself is quite fast.

Sure, it always depends on which language you come from. When you are used to the C++ makefile/build-system nightmares (not even thinking about multiple platforms :-( ) then everything else is really great, but when you look at how easy all the other languages make it to build programs, then again you wish for something similar for D too ...

(Anonymous)
Subject:part4, final
Time:2009-06-17 12:10 pm (UTC)
>>* Three, Walter Bright, the creator and original implementor of D, is an inveterate expert in optimization. <

>This is probably true, despite this the back-end of DMD produces not much efficient code. LDC (LLVM-back-end) is generally much better in this.

Arguable: dmd still compiles faster, and binary sizes are smaller. LLVM optimizations are much more promising, though.


>>Other procedural and object-oriented languages made only little improvements,<

>Untrue, see Clojure and Scala. Hopefully D will do as well or better.

Somewhat different playgrounds here: JVM-based vs. self-hosted. So those other languages' progress is limited much more by the JVM's restrictions as a platform than by the languages themselves (like tail-call optimization, or the maximum method size in the JVM).

>>That makes Java and C# code remarkably easy to port into a working D implementation.<

>It's indeed quite easy to port C/Java code to D. But translating C headers to D may require some work.

Hey, that's what tools are for. You could check out the OpenMorrowind project, which has C-to-D bindings, or QtD's QtJambi hack to produce D bindings/wrappers for C++ Qt classes (instead of the original Java ones).

Also, why not use something like GCC's libffi?

>And currently the D garbage collector is much less efficient than the common Java ones, so D requires code that allocates less often.

Just stub your own GC in. There are different GC strategies after all; why hope that 'one size fits all' in every case?
Java's GC was much worse than Oberon's, btw, when it first appeared.

>Functional programming juggles lot of immutable data, and this puts the garbage collector under a high pressure. Currently the D GC isn't efficient enough for such quick cycles of memory allocation, so it's not much fit yet for functional-style programming (or Java-style Object Oriented style of programming that allocates very frequently).

Wrong point. STM could be done in Clojure, and it could be done in D. It's not GC allocation that fits functional-style programming better, it's the immutability thing. And if you have many 'quick cycles of memory allocation', something is wrong with your memory allocator. It's not better when you have lots of manual malloc/free; it's better when you have memory pools, arenas, zones, and the right allocation (or GC) strategy, whichever fits your app best.
So I believe we can't rely on one single GC for all use cases; we need lots of strategies and pluggable GCs for different use cases. That could be done in a monadic way in Haskell, and it could be done with a pluggable GC in the D runtime (with an accurate GC interface).

>All this isn't meant to discourage you from using the D1/D2 languages.

Really, the D language itself is fun. It's fun to code with, it's fun to tinker with, though it suffers somewhat from community issues: lots of stale/dead/v0.01 projects (read: not mature enough, but still fun projects), 2 versions of the common runtime libraries (Tango/Phobos/druntime compatibility issues), 2 versions of the language itself (D1/D2 compatibility issues), 2 versions of somewhat mature compilers (read: D1/LDC vs. dmd), 2 versions of everything.

That shouldn't stop you in any way from using D: you should practice yourself, do one or two or more of your current C++/Java/C# projects in D, watch the issues, drawbacks, tradeoffs. And the fun. And make your own decision, whether it fits your needs or not. Anyway, 'learning Lisp will make you more enlightened for the rest of your life, even if you never program in Lisp again' -- and D programming will make your life happier, even if you won't get paid for it yet. It's lean and mean, it does metaprogramming, and it's fun.

(Anonymous)
Subject:Re: part4, final
Time:2009-06-30 12:00 am (UTC)
> Arguable: dmd still compiles faster, and binary sizes are smaller. LLVM optimizations are much more promising, though.

But the executables compiled by dmd run slower because of the old Symantec C++ compiler (now the Digital Mars C++ compiler) backend, which lacks a lot of the optimizations that modern C++ compilers do (see "Optimizing C++" at http://www.agner.org/optimize/ ). So since D is not getting any new backend soon, hope for C++-like speed lives only in gdc (which seems to be abandoned) and ldc. And the problem remains that they all share the same frontend, which also doesn't optimize well (look at stuff like inlining).

And then there is also the problem of the non-optimal garbage collector, which is not tuned for D and which all three implementations share :-(

(Anonymous)
Subject:Re: part4, final
Time:2009-06-30 12:46 am (UTC)
>>It's indeed quite easy to port C/Java code to D. But translating C headers to D may require some work.

>Hey, that's what tools are for. You could check up OpenMorrowind project, which has C to D bindings, or QtD's QtJambi hack ...

Now where are the tools? HtoD is the only one I know of, but it works only on Windows, and the things you mention here are not tools but already-converted files, which are useless when you want to create bindings for new C libraries.


>>And currently the D garbage collector is much less efficient than the common Java ones ...

>Just stub your own GC in. There are different GC strategies after all, why to hope 'one size fitts all' on every cases?
>Java GC's was much worse than Oberon's btw ...


Now you're a funny man. Since we all have a large number of highly tuned, D-optimized garbage collectors lying around on our hard disks, it's no problem at all to just drop the fitting one right into our newest project.

I don't want to sound mean, so please excuse me, but the thing (as you already pointed out in your last sentence) is that it takes a large amount of time and skill to develop a really optimized garbage collector for a new programming language. That is the reason why a lot of languages (including D) rely on the Boehm garbage collector (Mono does this too, and they have a lot more developers than D!). It took a lot of time even for Java and .NET to get highly tuned garbage collectors, and those have really big companies with a lot of developers behind them. Even the Mono guys have been working for a long time on a new (generational) garbage collector (see: http://mono-project.com/Compacting_GC ), since the Boehm-Demers-Weiser garbage collector is really a big speed bottleneck for the runtime, and it is clearly a bottleneck for D too! So if you already have an alternative high-speed generational compacting garbage collector ready for D usage, then please submit it to Walter so we can all take advantage of it.

> So I believe we can't rely on one single GC for all use cases, but we need lotso strategies ...


This is all true, but the thing is they all have to be implemented. And since we have, as far as I know, two garbage collectors (Tango's and the original Boehm-Demers-Weiser-style one), which are both far from optimal, work should first be centered on one highly tuned solution which is comparable with .NET and Java. After that we can create millions and millions of other collectors.

> Really, D language itself is fun ... It's fun to code with, it's fun to tinker about, though it suffers somewhat from community issues: lots of stale/dead projects/... 2 versions of common runtime libraries ... 2 versions of somewhat mature compilers

The reason for having a lot of dead projects lies in the lack of a stable version of the language for a long time. There are so many dead projects created by really skillful programmers, because a lot of them just got fed up with having to rework their code over and over again every time the language changed. It's not a big deal changing your code the first time, nor the second time, maybe not even the third time, but then you start losing the motivation to just go on and on.

> 2 versions of common runtime libraries

Big YES! 2 versions of a "STANDARD LIBRARY" is really a contradiction in terms. This is one of the worst things that can happen to any programming language. It splits third-party libraries, programmers, projects and so on into two different groups, and is really a major blocker for D development.

And the bad thing is that they are both so different. Phobos is a low-level C-style library (at some points a really bad hack) with very basic functions, but it is really great when you have to port old C or C++ code. Tango is the better designed, high-level one, which features a lot of things you expect from a modern standard library. But I want both (because I need all the features), and I need them on all platforms, for all compilers and for all libraries!

> 2 versions of somewhat mature compilers (read: D1/LDC vs. dmd)

I wish that would be true! (It depends on your definition of "somewhat".) I really wish they would be somewhat more mature (it's still beta)!


>That shouldn't stop you in any way from using D ...

That is all so true. There is nothing to add.

(Anonymous)
Subject:Re: part4, final
Time:2009-06-30 07:59 pm (UTC)
>Now where are the tools ? HtoD is the only one i know of but it works only on windows and the things you mention here are not tools but already converted files which are useless when you want to create bindings for new c libraries.

BCD, http://www.dsource.org/projects/bcd .

(Anonymous)
Subject:Re: part4, final
Time:2009-06-30 10:54 pm (UTC)
Thanks for the info. I will definitely try it out soon, since I want to play around with some projects which need bindings on Linux with D.

Please don't mind the harsh words. It's just the frustration of seeing that such a great language does not get the attention it deserves because of the lack of a good implementation.

And the really sad part is that the focus is not on making D 1.0 really a 1.0 version, but on creating newer and newer versions with more and more features (D 2.0, which is of course more fun).



(Anonymous)
Subject:Re1:Update1, Jun 17 2009
Time:2009-06-28 12:55 pm (UTC)
> Thank you Anonymous for your large amount of comments.

I hope that amount wasn't just TOO much :)) Sorry for the formatting/errors/grammar/spelling/whatever, and for making you read such a long post anyway -- I had to split it into several posts to fit through the one-post size limitation.
I appreciate the point of your initial post -- to clarify some things (myths/hype) based on real experience first. Do things, don't just speculate about them.

> Walter has said more than one time that having easy to use tools helps people use them more often.

That's exactly the point of the built-in unittests, built-in profiler, GC, maybe the assoc arrays implementation too -- not necessarily the best implementation of the features out of the box, but usable enough, and it "just works". And you can plug in your own if you're stuck with the built-ins' limitations.

>Command-line features like DMD profiler are enough for me in many situations.

Yeah, but some new adopters want to have the same level of comfort as, say, in Java (like debugging/profiling/JUnit). Or, say, SharpDevelop 3's profiler (or NUnit). Or, say, IronPython's debugger (and its SharpDevelop integration too). There's a fine devhawk blog post on IronPython's CLI integration, and the final result is as simple as things "just working" in SharpDevelop.
There are kinds of things which should "just work" in an IDE, like a D-in-a-box, too (by the same analogy as the Lisp-in-a-box/Emacs-in-a-box distros).
So an Eclipse/Descent plugin (or better, a plugin for Poseidon), once all that integration stuff is implemented, will bring another popularity boost to the D community, I believe. Just from a SharpDevelop user's POV: it doesn't matter what headaches the toolchain/IronPython/F# authors went through to make that integration; the final result is as simple as: I downloaded the recent .NET framework, SharpDevelop, F#, and have all those REPL consoles in my IDE. It's a .NET-in-a-box, SharpDevelop is.
Maybe graph2dot + some Perl/Python script to parse the profiler's text output and wrap it in a nicer graph, or XML for Eclipse, is enough :)) And there's xfProfiler for Windows http://wiki.team0xf.com/index.php?n=Tools.XfProf if you're looking for some GUI.
Cachegrind is the ultimate one, though.
Just ranting, though. Typing this post in the same old Emacs :)

(Anonymous)
Subject:Re: Re1:Update1, Jun 17 2009
Time:2009-06-30 01:08 am (UTC)
>Yeah, but some new adopters want to have the same level of comfort as, say, in Java ...


And that is the whole point of it all. Sure, D gives you a lot of possibilities "to reinvent the wheel", but the thing is that for most of the other popular languages you don't have to. And it is not a good thing to have to reinvent the wheel over and over again. One of the goals of D is to have the power of C++ but be far more productive.

And it is far more productive. But what is that good for if all of the productivity boost gets eaten up by wasting a lot of time fiddling around with the lack of a decent toolchain, a mature and well-optimizing compiler, a working linker, good debugging and so on?

The other popular languages offer all of this. And D has to compete with them to get popular and used (even more so for company use).


So every time I see posts like these from fans of the language which offer a lot of hacks (reinventions of the wheel) as "solutions" for problems with the language and the toolchain, it makes me a bit sad.

We all know that D is the greatest language on earth, but it still has a lot of problems. And that is not the end of the world. Admitting this and working to make things better is the way to go. Not denying problems and offering hacks as a solution for obvious flaws.

D has to reach the maturity of Java or C# regarding compiler, toolchain, third-party library support and so on. Otherwise it will never succeed and we are doomed to program system-level stuff in C++ forever. Now that really is the end of the world ...

(Anonymous)
Subject:Re2:Update1, Jun 17 2009
Time:2009-06-28 12:58 pm (UTC)
>I know, but I have read an half-serious proposal to create another compiler to compile the Linux kernel because GCC isn't too much fit for this purpose.

What makes GCC the best tool for an OS kernel are configurable ld scripts, and things like elfweaver. The Linux kernel itself (a proper version of it) could be compiled with tcc (by Fabrice Bellard, author of QEMU) -- on the fly, while booting from the boot manager. There are also icc (Intel C++) patches for the 2.6 kernel around, to fix some GCC-isms in the kernel source.
Pcc, lcc, tcc and Plan 9's 8c are all somewhat faster to run than gcc :)

>So I guess D compilers too may be even less fit for that purpose.

It depends on the interop you need with other parts of the OS. The OS kernel itself could be done in Lisaac (check out their Isaac OS and the Lisaac language benchmark), C++ (check out BeOS/Haiku, L4 and so on), Haskell (check out House OS -- it conforms to the L4 specs, I believe), Oberon (the Oberon OS), Lisp (Movitz, etc.), and so on. D is not much worse than C++ for that purpose :) -- actually Embedded C++ is a subset, like SafeD is of whatever stable D1/D2.

Read Xomb microkernel sources, L4 based (C++) microkernels, so on.

The hard things are drivers -- which are actually kernel extensions, so they have to be in the same language/API as the kernel exports, or be modularized like microkernel/userspace drivers plus message passing for the API. Or a hypervisor-like approach, which is becoming popular in L4 with paravirtualized L4:Linux drivers. Or a mini-VM exported to the kernel -- like the Scheme VM in JariOS.

Everyone really interested in OSdev stuff should definitely read the tutorial http://wiki.osdev.org/D_Bare_Bones , which is a plain port of the C tutorial to gdc, and everything under http://wiki.osdev.org/Category:Bare_bones_tutorials , especially the C++ bare-bones stuff and its relationship to the D vs. C++ situation. Study the toolchain issues (like gcc crossdev), try to port it to another toolchain (say, llvm-gcc/ldc, or dmc/dmd -- but the latter would be harder). Follow the gcc/crossdev/newlib porting guidelines, and try to port druntime the same way. Maybe try to port the DDL approach (or LLVM's) to load/unload modules according to http://wiki.osdev.org/Modular_Kernel.
GC support in a D kernel is really a non-issue: 1) there are oxymoronic things like 'realtime GC' out there; 2) if you follow the bare-bones tutorial, you'll have to implement manual memory management anyway, by overloading new/delete -- in pretty much the same way in D as in C++. In a C++ bare-bones kernel you have to avoid RTTI/exceptions/stack-unwinding-related stuff, so you write your kernel in a safe C++ subset. In a D bare-bones kernel you likewise have to keep the GC from ever running, by linking in an empty GC stub, and by staying in a 'safe D subset' too.

So the main pains are (really toolchain/libc/druntime issues):
1) linker scripts, so the OS kernel image can be loaded by a boot manager -- say, GRUB -- but there are microkernel OSes with ELF images around, and tools like elfweaver;
2) implementing kext load/unload like insmod/rmmod (a modular kernel) -- but there are tools even for C to load/unload/compile modules on the fly (say, `C), not to mention tools like DDL and/or xfLinker or LLVM-JIT in D, or even the reinvent-your-wheels approach http://members.shaw.ca/burton-radons/The%20Joy%20and%20Gibbering%20Terror%20of%20Custom-Loading%20Executables.html (heck, that guy reimplements the dynamic library's wheels by loading the executable image by hand -- but you get all the flexibility and full control over plugins/load/unload, even things like loading a COFF Windows image under Linux);
3) exporting the libc API or whatever the standard runtime is: "porting newlib stuff to a new kernel", translated to a minimal D runtime -- but D already supports the C cdecl calling convention, so no FFI is needed, so the C newlib porting guide applies here; you may just need fancy C wrappers around D code in case you need to export your D kernel stuff to C :)
4) implementing drivers (in a C API or in high-level D). A tough thing, but there is the Devil-like approach (http://hal.archives-ouvertes.fr/docs/00/07/24/90/PDF/RR-4136.pdf , http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.6268 , "Improving Driver Robustness: an Evaluation of the Devil Approach") with rule-based executable bytecode specifications, or IOKit ported to D (it is a DDK in C++), or an L4-like microkernel with paravirtualized OS support (like L4:Linux), using those paravirtualized drivers.

(Anonymous)
Subject:Re3:Update1, Jun 17 2009
Time:2009-06-28 12:59 pm (UTC)
>On the other hand Microsoft is trying to use a modified C# to write a OS

Singularity OS with Sing#, which reminds me a lot of Inferno OS with the Limbo language. Limbo and Sing# use the same approach of channels to communicate between different OS parts, so one can implement something like a protocol buffers API for microkernel message passing.

>A GC can't be avoided, but maybe it's possible to keep it outside, dynamically linked.

std.gc.disable(). Or there's a GC stub which can be linked into the binary (like http://dsource.org/projects/tango/browser/trunk/lib/gc/stub/gc.d ), and the GC itself is called in only a few cases, which can be avoided. Knuth did all the memory management via arrays in TeX, in the Pascal-based WEB language -- certainly with no GC -- so the GC can be avoided in a similar use case (real men can write a Fortran program in any language) :).

> OMeta is the future :-)
> See also Pymeta, Meta for Python:
> http://washort.twistedmatrix.com/

These are links that give me some itches to scratch: implementing a language in Python/LLVM http://llvm.org/pubs/2009-05-21-Thesis-Barrett-3c.html (imagine PyMeta + this), or Deelight/PyD/CeleriD, which seem to be a nice way to integrate Python with D, or the Boo language on .NET which has PEG library support, or C# OMeta implementation projects like OMeta# or IronMeta -- which are on .NET 3.5 / C# 3.0 with all that LINQ and lambda syntax, which would have to be emulated well enough via D delegates or maybe D2's std.algorithm :)
And a project like D.NET, which could be used to bootstrap a yet-to-be-done "DMeta" port from OMeta# or IronMeta via the .NET framework (Mono?) (or is it just too raw yet?).
And there's an LLVM implementation of .NET -- the VMKit LLVM project -- so theoretically a .NET app like OMeta# could be linked with an LDC-compiled D app right now in VMKit.

It's just a pity that I have a day job to do, rather than hacking on D/OMeta alone :)

(Anonymous)
Subject:Re4:Update1, Jun 17 2009
Time:2009-06-28 01:02 pm (UTC)
>or you can of course re-implement them outside the language, but then it's better to remove the built-in unittest features. Keeping both is not good.

C# actually requires no special syntax, but has class annotations, which are so useful for reflection and introspection of metadata.
I don't see why having both is a problem, though -- it's a question like why the GC is in the standard library while the associative arrays implementation is built into the language (and what if you need another hash function for assoc arrays). So maybe the better way is for built-in constructs to trigger standard library ones, which could be overridden just like the GC implementations.

>The DMD compiler already has built-in things that are beyond the purposes of a normal compiler

:-)

>Adding this automatic build feature isn't essential but it's handy and positive.

Like in-memory linking of multiple sources, maybe in separate threads -- why is this in the compiler?
Well, it's faster -- but it doesn't have to be there; it could be in the linker/build system too.
OTOH, handling this C-to-H stuff in the C toolchain was done in the build system and not in the compiler because of the crappy C compiler itself, which has no proper modules. With Plan 9's C compiler you didn't have to specify -llibrary; it has a specific #pragma lib, in the .h files, to link against the proper library. So it was the clunky module support in C and the toolchain stuff that just has to be done right in D.

> So it seems that to change the syntax adding an "in" inside the foreach you have to add some feedback between layers, and this is seen as bad for the compiler (and probably Walter is right here).

Maybe LDC is better suited for this. You don't have to fiddle with the other layers if all your syntax extension compiles down to plain D.
BTW, this is interesting -- in which way could aspects be implemented in D? The mixins/std.algorithm way?
Some Boo examples with PEG (or some OMeta implementation) handle this nicely with annotations in .NET. An annotation loads the new syntax into the compiler before a source block in that syntax; it's the Boo or OMeta parser frontend that's extensible enough.
Dylan macros handle this in another, but also extensible, way: http://mike-austin.com/blog/2005/10/dylan-macro-system-is-badass.html

What feedback do you need for 'item in container', when we already have foreach with the same semantics and just need to make 'in' a macro over already-available stuff? Like just translating 'foreach(item in container){..}' to 'foreach(item; container){..}' via mixins, opApply and friends.
Although this is just a generic example, so maybe a semantic layer for the new syntax would be needed (it's uncertain where to plug it into DMD, but it's much more modular in LDC).
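Something like this opApply sketch is indeed all the machinery foreach needs ('IntList' is an invented example):

class IntList {
    int[] data;

    // foreach (x; list) lowers to a call of this method
    int opApply(int delegate(ref int) dg) {
        foreach (ref x; data) {
            int r = dg(x);
            if (r)
                return r;  // nonzero: the loop body did break/return
        }
        return 0;
    }
}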

(Anonymous)
Subject:Re5:Update1, Jun 17 2009, final
Time:2009-06-28 01:03 pm (UTC)
>>new instaneous dee0xd<

>Never seen that before.

Forum, and some benchmarks: http://talks.dprogramming.ru/index.php?board=11.0 (this is in Russian, so Google-translate it).
Homepage: http://code.google.com/p/dee0xd/
In short, people have built gtkd, DWT and Tango stuff with dsss (which is more like maven than ant -- that is, more like CPAN for downloading than a tool like make for building) -- and have got timing benchmarks like 0.5x..10x relative to 'dsss build' on Windows (on Linux the difference is much more modest). The tool itself is written in D; it currently has somewhat fewer features than dsss/maven or rebuild/scons/ant, but it does its job.

> Binary sizes produced by LDC are sometimes bigger but they are working on this, and most times the size is similar.

It's just amazing that DMD is a one-man show and still has a decent optimizer, while LDC's optimizations are LDC itself + the LLVM passes. LDC just *has* to be faster with all those optimizations and that manpower. My point is that DMD itself is fast enough for daily usage -- though I haven't done any benchmarks yet.

>Java at the beginning was WAY worse, I know, I have stated this at the beginning of my blog post.

But it has bootstrapped, and now it has all those tools, bells and whistles around it. And it was somewhat better than C++ when it appeared (I know COM, ATL, STL, Boost, ACE -- but it's the kind of spooky useless knowledge which most people out there seem to run away from at any cost, even to Java). And Lisp still hasn't got many of these tools, though it's been around 50 years.
Java performance is simply good enough for current hardware (and the newer JVMs too).
So C++ seems like a dead end, and C#/Java just copy features from each other. Seems like some kind of "Innovator's Dilemma": some fail and others successfully bootstrap.

> From my benchmarks I think the current D GC isn't fit for such kinds of code.

It would be nice to benchmark some other available GC implementations too.

>D is my second preferred language (after Python)

What do you think about Deelight, PyD, CeleriD -- and all that D/Python integration stuff?

(Anonymous)
Subject:Re: Re5:Update1, Jun 17 2009, final
Time:2009-06-28 07:00 pm (UTC)
>> From my benchmarks I think the current D GC isn't fit for such kinds of code.

>It would be nice to benchmark some other available GC implementations too.

maybe even a more flexible GC, like MPS:
http://www.ravenbrook.com/project/mps/doc/2002-01-30/ismm2002-paper/

(Anonymous)
Subject:Re: Re5:Update1, Jun 17 2009, final
Time:2009-06-29 11:26 pm (UTC)
>It's just amazing that DMD is a one man show, and still has decent optimizer, while LDC's optimizations are LDC itself + LLVM passes. LDC just *have* to be faster with all those optimizations and manpower. My point is that DMD itself is fast enough for daily usage -- though I didn't do any benchmarks yet.

It depends on what you mean by daily usage. If for you it's enough to have the same speed as mainstream languages like Java or C# or anything like that, then dmd generates executables which are fast enough.

But the whole point of D is being a kind of C++ replacement (C++ done right): a real systems-programming language and not another VM language (otherwise we could use Scala (which is really nice), Boo, Nemerle and so on).

Therefore one of the major goals of D has to be creating executables which have nearly the same speed as C++ ones.

The thing is that this should not be that hard, because dmd and gdc are basically C++ compilers with just another frontend (the D one), since Walter recycled his old Symantec C++ compiler for D.

But there is one problem. The Symantec C++ compiler (now the Digital Mars C++ compiler) is quite old: it was state of the art back at the end of the 1990s. So it lacks a lot of optimizations which newer compilers, like gcc, icc, msvc and so on, already have.

Take a look at the C++ optimization manual at http://www.agner.org/optimize/ . There, in chapter 7, you can find a comparison of the optimization facilities of all the modern C++ compilers (including Digital Mars). And to make it short, Digital Mars' optimization facilities are really lacking.

Agner Fog comes to a conclusion on the Digital Mars compiler which reads:

"This is a cheap compiler for 32-bit Windows, including an IDE. Does not optimize well."

And when you take a look at Leonardo's simple path-tracing benchmark, where he compared the speed of the C++, gdc, ldc and dmd executables, you will see that dmd is really appreciably slower.

And then just consider that gcc and gdc differ only in their frontends, and look at the speed difference (they could theoretically have the same speed, since it's the same code generator and backend optimizer, though sure, you have to keep in mind that D uses a non-optimal garbage collector (the Boehm-Demers-Weiser one)).

To make it short: dmd really has some problems with optimizations, but the really bad thing is that a lot of non-optimal code seems to be generated in the frontend, which also has an impact on gdc and ldc, and that is a real problem (garbage-in garbage-out syndrome).

I don't want to bash D (because I love it), but there is still a lot of work to be done before it can keep its promises. And I would really like to see more work put into D 1.0 and its toolchain so that it really deserves this version number: one nearly bug-free, well-optimizing compiler with a good debugger, a working linker, a great standard library (just one) and a productive IDE is worth more than two versions which are both beta quality.