Tuesday, August 31, 2010

Kubuntu or KDE?

Once again I find myself having to consider a desktop switch, from KDE to GNOME, thanks to Kubuntu and the newborn KDE 4.5. The latter is fantastic, but the combination of the two gives me a practically unusable desktop. And it seems I am not the only one having problems.
All I can do is try it out and decide whether to keep this fantastic, innovative desktop (which in my opinion is light years ahead of the others) or give in to a leaner, more functional one.

Maxine linkage/JNI errors

After a source update of my Mercurial clone of Maxine, I was unable to run the virtual machine anymore. The machine exited with signal 11, even after a clean build. I increased the debug messages and discovered the error:

loadSymbol(0x7f887f161128, "nativeSetGlobalThreadAndGCLock")
loadSymbol(0x7f887f161128, "nativeSetGlobalThreadAndGCLock") = (nil)
Error message: /sviluppo/java/src/maxine-
 undefined symbol: nativeSetGlobalThreadAndGCLock

Thanks to the help on the mailing list, I found that the native library I was using was older than the Maxine executable I had compiled, so there was a name clash on some symbols. Here is how I solved the problem:
  1. do a max clean to remove all generated objects;
  2. remove the whole content of the Native/generated/linux directory;
  3. do a max build to rebuild the virtual machine;
  4. launch again the boot image generator to get the maxine.vm image into the Native/generated/linux directory.
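The four steps above might be scripted roughly as follows. This is only a sketch of my recovery procedure: the clone path is illustrative, and the final image-generation subcommand reflects my setup, so adjust both to your own checkout.

```
# Illustrative path to the Mercurial clone; adjust to your setup.
MAXINE=/sviluppo/java/src/maxine-hg/maxine

"$MAXINE/bin/max" clean                     # 1. remove all generated objects
rm -rf "$MAXINE/Native/generated/linux"/*   # 2. drop the stale native artifacts
"$MAXINE/bin/max" build                     # 3. rebuild the virtual machine
"$MAXINE/bin/max" image                     # 4. regenerate the boot image into
                                            #    Native/generated/linux
```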

Monday, August 30, 2010

Maxine: increasing the machine debug messages

It is possible to increase the messages produced by the virtual machine while in native code by editing the file Native/share/log.h and changing the macro value

#define log_ALL 0 // log disabled

to a true value

#define log_ALL 1 // log enabled

Sunday, August 29, 2010

PostgreSQL (Italian) & Social Networks

This is a very short list of the PostgreSQL-related groups that can be found on social networks and where it is possible to discuss in Italian:

In addition to the ones mentioned above, several international groups can be found on the various social networks.

Thursday, August 26, 2010

Maxine Native Launcher in Eclipse

The Maxine VM can be launched from the command line using the max script, but it can also be launched within Eclipse thru the Native Launcher. This can be really useful if you are developing a Java program or an extension and you want to test it without exiting the IDE.
In order to create a launcher for Maxine, configure the Native Launcher entering the External Tools configurations from the menu, as shown below.

Then create a new configuration (I called mine MaxineVM Launcher) and set the Location to the max script inside the workspace, ${workspace_loc:/bin/max}. This requires that you have imported all the Maxine projects into the workspace.
Give the configuration the name you want; in the Arguments section place the vm argument (which instructs the max script to start the virtual machine), followed by the classpath of your project classes and the fully qualified name of the class you want to execute. For instance, to start the class myproject.main.Main of the project MyProject you have to specify:

vm -classpath ${workspace_loc}/MyProject/classes myproject.main.Main

This of course assumes that the compiled classes are placed in the classes directory; otherwise, specify the path where your compiled classes actually are.

If you are not changing the Maxine VM sources, you can avoid having the launcher trigger a full workspace rebuild, which can take very long: configure the launcher to build only your project among the selected ones.

Finally, under the Environment tab, you have to specify the JAVA_HOME and JUNIT4_CP variables that the Maxine max script expects to find and use. Having done this configuration, you will be able to start the VM from the IDE.

Tuesday, August 24, 2010

Eclipse Helios and KDE 4.5

After upgrading to KDE 4.5 on my Kubuntu 10.04, Eclipse Helios (3.6) started crashing. Essentially, every time a directory or a package with many files/classes was expanded, the IDE exited abruptly. This is a bug that I also reported; for now, the workaround consists of disabling the malloc/free consistency check (it is indeed a bad free that causes the crash) by exporting the MALLOC_CHECK_ variable with the value zero. If you start the program from a terminal or a script, exporting the variable is easy; if instead you start it from a desktop icon, you need to put the variable assignment before the actual Eclipse command, as shown in the following figure.
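In practice the workaround boils down to the following (the path to the Eclipse binary is illustrative):

```shell
# Disable the glibc malloc/free consistency check (0 = checks off);
# this works around the bad free() that crashes Eclipse under KDE 4.5.
export MALLOC_CHECK_=0

# When starting Eclipse from a desktop icon, prefix the command instead,
# e.g. (binary path is illustrative):
#   env MALLOC_CHECK_=0 /opt/eclipse/eclipse
```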

Thursday, August 19, 2010

Be serious: don't use CRAPL! Use Open Source licenses!

Surfing the Web I found another programming license: the CRAPL (Community Research and Academic Programming License). I don't know whether the author is serious about this license (it seems he is) or joking; however, I consider the license valid for the following discussion, since it raises a concrete problem about the coding done in research.
The idea behind this license is good: forcing academia to release its code as Open Source. The execution is tremendously bad: there is no need, in my opinion, for yet another Open Source-like license.
The aim of CRAPL seems to be removing the academic world's embarrassment at publishing low-quality code produced under release deadlines. The idea is to release the code even if it is just a proof of concept, without being bothered with code reviews, design patterns, and stuff like that.
This is crap! It is really controversial!

First of all, consider the amount of theory (and practice) in the design and development of software, and all the techniques that academia tries to push into industry, for instance eXtreme Programming (XP). A natural question could be: why are you teaching me this stuff if you are the first who does not use it? Academia cannot teach quality while producing low-quality software!

Moreover, there is no need for another license at all! If you want to release the code, use an existing license: BSD if your code is really advanced, or a GPL-like license if you are afraid someone could steal it and build a commercial product on it.
Consider also clause 2:

    You are permitted to use and/or modify the Program for the
    validation of novel scientific claims if You make a good-faith
    attempt to notify the Author of Your work and Your claims prior to
    submission for publication.
What does good-faith mean? Moreover, I cannot see any clause requiring that the authors be notified of modifications, as almost every Open Source license states. This is a really important point, since it happened to me that some of my code was not released as Open Source and was kept by the University I worked for, which was thus able to do whatever it wanted with it (even destroy it!).

I really think academia should stop being embarrassed by low-quality code and should start producing a good-quality Open Source culture. After all, I'm not surprised to see TODOs and FIXMEs in the code; I'm much more surprised to see projects without code.
Finally, I don't find it embarrassing to produce a proof of concept and release it as Open Source once the code is mature enough (or beautiful, or high-quality, or your-favorite-adjective-here). But it is important that there is a commitment to release the code as Open Source sooner or later! And this is why I cannot understand how the research community can still peer-review work whose code is not (and will not be) available in any way. In my opinion, every work that produces or derives code should state the license and terms of the code itself.

Don't think about embarrassment, code and release at your best. 
Someone else will be proud to fix your ugly-code!

Maxine VM & Eclipse

This post is a brief howto for installing the Maxine VM so it can be used from Eclipse. It is not meant to replace the official howto, but to add a few details to the installation instructions, based on my experience with this process.
I personally did the following steps on my Ubuntu Linux 10.04 (kernel 2.6.32-22) x86_64, using Eclipse Helios (3.6), Mercurial 1.4.3, MercurialEclipse 1.6.0, CDT 7.0.0 and the JDK 1.6.0_20 from the Ubuntu repositories. The following is also the solution to my problem thread on the users mailing list.

Downloading the Maxine source tree
Maxine is organized into several projects, and in order to have a complete environment you have to check out all of them. Being a Sun project, Maxine uses Mercurial (hg) to manage the source tree. To check out the source tree, first create a directory to contain the whole tree, for instance maxine-hg (I have the convention of placing the name of the SCM in the directory name, since I use different SCMs for different projects). Enter that directory and clone the repository:

cd maxine-hg
hg clone maxine
The process will start downloading the source tree; this will take a while, depending on the network speed. At the end you will find a maxine directory inside the maxine-hg one, and inside the former a directory for every project, as shown below:

$ ls maxine-hg/maxine
Assembler  bin  C1X  CRI        JDWP       Native  TeleJDWP  VMDI
Base       C0X  CPS  Inspector  MaxineC1X  Tele    VM

Now it's time to compile the Maxine VM to see if the system is ready to host it. To compile, you need to set JUNIT4_CP to the JUnit 4 jar and JAVA_HOME to the home of the JDK (which on Ubuntu is /usr/lib/jvm/java-6-sun). So do the following:

export JUNIT4_CP=/sviluppo/java/jars/junit4.4.jar
export JAVA_HOME=/usr/lib/jvm/java-6-sun
maxine-hg/maxine/bin/max build

If everything goes well, after a couple of minutes you will have the Maxine machine compiled. This means that your system is ready to compile it and that you will be able to build it in Eclipse too. Do a max clean to remove all the compiled artifacts and start Eclipse.

Importing the projects into Eclipse
Once the Maxine source tree is ready, it can be imported into Eclipse. The official Maxine/Eclipse HowTo states that you should do a plain import of the maxine container directory (maxine-hg/maxine in the above). This can cause problems, because that kind of import copies the imported resources into the workspace. This means, as emphasized by the official guide, that you are working on an isolated source tree and that, after an update of the source tree itself, you have to re-import it. And this can cause further problems: if you don't delete the projects including their on-disk content, your import will be incomplete and you will stumble over strange compilation errors.
My suggestion is therefore to connect Eclipse to the Mercurial source tree:
  • select Import from the File menu;
  • instead of importing an existing Java project, import an existing Mercurial project from a local repository;
  • specify the maxine-hg/maxine directory as the root directory of the project, that is the Mercurial local repository. The dialog will show you all the projects to import, check all of them and if you are working with working sets select a working set to which the projects will be added (in my case Maxine-RVM). Click Finish to complete the import process;
  • when the import finishes, you will see the projects listed in the project explorer, and all the projects will be initially opened. Please note that after a while, depending on your computer speed, all the projects will be decorated with Mercurial information, like the tip;

  • now you can compile the projects from within Eclipse, but before that you have to define the JAVA_HOME variable (or at least check that it exists), as you did for the command-line compilation. This variable is needed to compile the native code (the Native project). To set it, open the Window, Preferences menu, go to the C/C++ preference page and select the build variables, then add the variable and point it to the JAVA_HOME you used for the command-line compilation. The CDT will use gmake to compile the sources, using the JAVA_HOME variable defined here;

  • now you can start the build, using the Project, Build All or Project, Build Working Set menu items. The compilation should complete without errors (but with around 1000 warnings). On my first attempts I ran into several compilation problems:
    • cannot compile the Native project due to a missing jni.h: this means that the JAVA_HOME variable is pointing to a wrong location or that it is not set at all. Double check it!
    • nonexistent imports or wrong access to methods/fields: this means that your sources are not updated to the latest version, or that you have done a new import over an existing one. Try removing all the projects from Eclipse (deleting them from the workspace too) and re-importing them.
    • errors related to @Override annotations: there is something wrong with the project settings. Check whether you enabled reporting errors instead of warnings for such annotations.
  • once the compilation has been completed, you can generate the boot image. To do so, run the BootImageGenerator-default from the Run menu;

  • the image generator will start compiling the boot image, and you will see a lot of messages scrolling on the console. Producing the image will take around 2-3 minutes;
  • you are almost done; before launching the VM or the Inspector, check whether the $JAVA_HOME directory includes a jdk directory. If it does not, define a symbolic link (from within $JAVA_HOME) to fake the official JDK installation layout:
sudo ln -s `pwd` jdk
  • now you can launch the Inspector using, for example, the MaxineInspector-HelloWorld from the Debug menu;
  • after a while the Inspector window will appear, and the system will stop at a predefined breakpoint. Click the Resume button of the Inspector GUI to complete the process. After a few seconds the Inspector menu bar will change color, passing from green to red. This means that the thread has completed its execution, and scrolling the Eclipse console you can see the "Hello World" message.

Problems launching the Inspector
During my tests I was unable to launch the Inspector: Eclipse stopped at a breakpoint, producing the following stack trace and debug information:

FATAL VM ERROR[1]: error in initializeSystemClass
Faulting thread: main[id=1]
------ Stack dump for thread main[id=1] ------
        -> com.sun.max.vm.runtime.FatalError.dumpStackAndThreadLocals(Lcom/sun/max/unsafe/Pointer;Z)V [0x43be38a0+101]
        -> com.sun.max.vm.runtime.FatalError.unexpected(Ljava/lang/String;ZLjava/lang/Throwable;Lcom/sun/max/unsafe/Pointer;)Lcom/sun/max/vm/runtime/FatalError; [0x43b8fce8+412]
        -> com.sun.max.vm.runtime.FatalError.unexpected(Ljava/lang/String;Ljava/lang/Throwable;)Lcom/sun/max/vm/runtime/FatalError; [0x43be8530+54]
        ->$Phase;)V [0x43be81a0+117]
        -> com.sun.max.vm.VMConfiguration.initializeSchemes(Lcom/sun/max/vm/MaxineVM$Phase;)V [0x43b92f78+233]
        -> [0x43b9cd28+94]
        -> [0x43af4ba8+38]
        -> com.sun.max.vm.thread.VmThread.executeRunnable(Lcom/sun/max/vm/thread/VmThread;)V [0x43b936a8+208]
        ->;Lcom/sun/max/unsafe/Pointer;Lcom/sun/max/unsafe/Pointer;)V [0x43af20f8+895]
        -> native{/sviluppo/java/src/maxine-hg/maxine/Native/generated/linux/ (0x7fb0ff2e5000) at thread_run (0x7fb0ff2fd10a+390)}
------ Thread locals for thread main[id=1] ------
                              SAFEPOINT_LATCH: {E} 0xb94118  {D} 0xb94238  {T}
             SAFEPOINTS_ENABLED_THREAD_LOCALS: {E} 0xb94118  {D} 0xb94118  {T} 0xb94118
            SAFEPOINTS_DISABLED_THREAD_LOCALS: {E} 0xb94238  {D} 0xb94238  {T} 0xb94238
           SAFEPOINTS_TRIGGERED_THREAD_LOCALS: {E} 0xb93ff8  {D} 0xb93ff8  {T} 0xb93ff8
                         NATIVE_THREAD_LOCALS: {E} 0xb94358  {D} 0xb94358  {T} 0xb94358
                                 FORWARD_LINK: {E} 0  {D} 0  {T} 0
                                BACKWARD_LINK: {E} 0xb97118  {D} 0xb97118  {T} 0xb97118
                       AT_SAFEPOINT_PROCEDURE: {E} 0  {D} 0  {T} 0
                             EXCEPTION_OBJECT: {E} 0  {D} 0  {T} 0
                                           ID: {E} 0x1  {D} 0x1  {T} 0x1
                                    VM_THREAD: {E} 0x40fa4008 main[id=1]  {D} 0x40fa4008 main[id=1]  {T} 0x40fa4008 main[id=1]
                                      JNI_ENV: {E} 0x7fb0ff508540  {D} 0x7fb0ff508540  {T} 0x7fb0ff508540
                       LAST_JAVA_FRAME_ANCHOR: {E} 0x7fb100021e18  {D} 0  {T} 0
                                MUTATOR_STATE: {E} 0x1  {D} 0  {T} 0
                                       FROZEN: {E} 0  {D} 0  {T} 0
                                  TRAP_NUMBER: {E} 0  {D} 0  {T} 0
                     TRAP_INSTRUCTION_POINTER: {E} 0  {D} 0  {T} 0
                           TRAP_FAULT_ADDRESS: {E} 0  {D} 0  {T} 0
                          TRAP_LATCH_REGISTER: {E} 0  {D} 0  {T} 0
                   HIGHEST_STACK_SLOT_ADDRESS: {E} 0x7fb100023000  {D} 0x7fb100023000  {T} 0x7fb100023000
                    LOWEST_STACK_SLOT_ADDRESS: {E} 0x7fb0fffe4000  {D} 0x7fb0fffe4000  {T} 0x7fb0fffe4000
             LOWEST_ACTIVE_STACK_SLOT_ADDRESS: {E} 0  {D} 0  {T} 0
                          STACK_REFERENCE_MAP: {E} 0xb943a8  {D} 0xb943a8  {T} 0xb943a8
                         STACK_REFERENCE_SIZE: {E} 0x1008  {D} 0x1008  {T} 0x1008
                  IMMORTAL_ALLOCATION_ENABLED: {E} 0  {D} 0  {T} 0
                           INTERPRETED_METHOD: {E} 0  {D} 0  {T} 0
                       NATIVE_CALL_STACK_SIZE: {E} 0  {D} 0  {T} 0
                                     TLAB_TOP: {E} 0x7fb0df448000  {D} 0  {T} 0
                                    TLAB_MARK: {E} 0x7fb0df43f8b0  {D} 0  {T} 0
                                 TLAB_TOP_TMP: {E} 0  {D} 0  {T} 0
                                TLAB_MARK_TMP: {E} 0  {D} 0  {T} 0
                                TLAB_DISABLED: {E} 0  {D} 0  {T} 0
                           TLAB_REFILL_POLICY: {E} 0x7fb0df0d8000  {D} 0  {T} 0
                               ERROR_CONTEXTS: {E} 0x42792b10  {D} 0  {T} 0
                       AT_SAFEPOINT_PROCEDURE: {E} 0  {D} 0  {T} 0

As you can see from the debug information, there is a link error because the library for amd64 cannot be found under the jdk directory. This is because the JDK installed from the Ubuntu repositories has a layout that differs from the official Java JDK's. The solution, as detailed above, is to fake that layout by creating a link to the jdk directory.

Saturday, August 14, 2010

JFK: implementing a raw yield return

A new feature has been added to JFK: a raw support for the yield return.
Before explaining it, consider when a yield return is useful: you are looping over an iterator and want to return, one at a time, the next value extracted from it.
Consider the following method:

public int cycle(){
    for( int i : this.myIterable )
        return (i*i);
    return 0;
}

Now, supposing myIterable is an integer iterable that returns numbers from 0 to 1000, the above method does not cycle over all the values, but always returns the square of the first element in the iterator (so it is 0). The adoption of yield return allows the method to return the next value from the iterator each time it is called.
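To make the intended semantics concrete, here is a plain-Java sketch (no JFK involved, all names are illustrative) of how a yielded cycle() behaves: the class-level iterator keeps its position between calls, so each invocation returns the next squared value.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class YieldSemantics {

    // class-level iterator: its position survives across cycle() calls,
    // which is exactly what the yield return mechanism relies on
    private final Iterator<Integer> myIterator;

    public YieldSemantics() {
        List<Integer> values = new ArrayList<Integer>();
        for (int i = 0; i <= 1000; i++)
            values.add(i);
        myIterator = values.iterator();
    }

    public int cycle() {
        if (myIterator.hasNext()) {
            int i = myIterator.next();
            return i * i;          // next squared value on every call
        }
        return 0;                  // iterator exhausted
    }

    public static void main(String[] args) {
        YieldSemantics y = new YieldSemantics();
        for (int call = 0; call < 3; call++)
            System.out.println("Cycling with yield: " + y.cycle());
    }
}
```

Each call advances the iteration: the three calls above print 0, 1 and 4.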
At the time of writing, JFK supports yield return when using iterators, and to get this support developers have to:
  1. use a class-level iterator (i.e., not in a local method scope);
  2. annotate the iterator with the @Yield annotation to inform JFK that such iterator can be yielded. Since JFK must have access to the iterator, developers can indicate in the annotation a getter method for the iterator in case the latter is private;
  3. ask JFK to build an instance that exploits the yield return.
The important thing to note in the above duties is that no special keyword or special objects must be used by developers: the methods are written as in normal Java.
So, coming back to the above cycle method example, supposing the myIterable object has been annotated with the @Yield annotation, the following code:

IYieldBuilder yBuilder = JFK.getYieldBuilder();
// GoodYielder is an object with the cycle method
GoodYielder gy = (GoodYielder) yBuilder.buildYielder( GoodYielder.class );
for( int i = 0; i < 10; i++ )
     System.out.println("Cycling with yield: " + gy.cycle() );

will produce

        Cycling with yield: 1
        Cycling with yield: 4
        Cycling with yield: 9
        Cycling with yield: 16
        Cycling with yield: 25
        Cycling with yield: 36
        Cycling with yield: 49
        Cycling with yield: 64
        Cycling with yield: 81
        Cycling with yield: 100

The current yield return support is really minimal and useful only for a limited set of use cases. I'm currently investigating a more complete and general approach, taking into account that no syntactic or semantic overhead must be introduced: the yielding must be declarative!
In the meantime, if you need a more complete yielding mechanism (whose syntax and semantics are very different from JFK's aim), have a look at the Java Yielder here.

Tuesday, August 3, 2010

JFK: Java Methods on Steroids

Warning: if you got here searching for information about the 35th president of the United States of America, you are in the wrong place! This post (and related ones) has nothing to do with politics or the above president; this is only Java!

Another warning: I have some strong opinions. You can disagree with me
as much as you want, but please keep in mind I'm open only
to constructive discussions.

JFK stands for Java Functional Kernel, and it is a framework that brings function power to the Java World. Java does not use the term "function", preferring the term "method", but the two are, at least for what concerns JFK, the same. A function/method is a code block that you can call passing arguments on the stack and getting back a result.
But while in Java first-class elements are classes, and therefore you cannot abstract a method outside a class, JFK brings the power of function pointers to Java, allowing you to use functions (or methods, if you prefer such term) as first-class entities too.
"Wait a minute, Java does not allow function pointers!" 
That is true, standard Java does not allow them, but JFK does.
"I don't need function pointers in Java, Java is a modern OOP language where function pointers are not useful and, besides, I think they are evil!"
If this is what you are thinking, well, I will not call you idiot because I'm polite, but you probably should not read this; you should go back to your seat and continue typing on your keyboard some standard Java program. 
"An OOP language cannot admit function pointers!"
Ok, now I'm really thinking you are an idiot, so please move away. Being OOP does not mean that function pointers are not allowed, but rather that they must be objects too.
Now if you think this can bring a new way of developing Java applications and can increase your expressiveness, keep reading.

So what do you get with JFK?
JFK enables you to exploit three main features:
  • function pointers
  • closures
  • delegates (in a way similar to C#)
At the time of writing, JFK is at version 0.5, which is stable enough to run all the tests and the examples reported here. The project is not yet available as Open Source, but it will be very soon, probably under the GNU GPLv3 license. In the following I will detail each of the above features.
Please take into account that JFK is not reflection! Java already has something similar to a function pointer, namely the Method object of the reflection package, but JFK does not use it. In JFK everything is done directly, without reflection, in order to speed up execution.

Function pointers
The first question is: why do I need function pointers? Do I really need an abstraction over methods/functions? Well, I think YES!
Just as OSGi provides a mechanism to export only some packages at run-time, function pointers provide the ability to export functions without having to export objects. To better understand, imagine the following situation: you've got an object (Service Object) with two service methods, called M1 and M2. Now you've got two processes (whatever a process means to you), each of which must use only one of the methods: the first process must use M1 and the second one M2. Being M1 and M2 defined in the same class, the only solution is to share the service object between the two processes, as shown in the following picture.

This solution is not modular at all, since both processes must keep a reference to a shared object. A better solution, without having to write a wrapper object for every method, is to create a set of interfaces, each one tied to a single method. In this way the service object can implement every interface it must expose, and each process can hold a reference to an interface (and therefore to the service object). The situation is shown in the following figure.

This approach has several drawbacks:
  1. a new interface is needed to modularize every exposed method;
  2. it is still possible to inspect, thru reflection, the reference and understand which object it is "hiding".
Point (1) requires developers to write a lot of code, and this is what happens with normal event handlers, such as ActionListener: you have to declare a single-method interface for every method you want to expose. Point (2) is a security hole: with reflection, the reference holder can inspect the object, discover it has the method M2, and even call it.
So, while very OOP, this approach has limitations that can be overcome with the adoption of function pointers.
With function pointers it is possible to expose only pointers to a method M1 or to M2 without having to expose the object (or its interfaces) to the consumer processes, as shown in the following picture.

This is a very modular way of doing things: you don't have to worry about the service object, introspection against it, or even where the object is stored/held: you are handing out only the pointer to one of the functions, and the receiving process will be able to exploit only the method/function behind such pointer.
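The picture above can be sketched in plain Java. The interface and names below are illustrative, not the JFK API, but they show how handing out only a callable "pointer" keeps the service object hidden from the consumer.

```java
// Illustrative stand-in for a function-pointer abstraction (not JFK's IFunction).
interface IFunctionLike {
    Object executeCall(Object[] args);
}

// The service object with the two methods from the example.
class ServiceObject {
    String m1(String who) { return "M1 for " + who; }
    String m2(String who) { return "M2 for " + who; }
}

public class PointerDemo {
    public static void main(String[] args) {
        final ServiceObject service = new ServiceObject();

        // The consumer process receives only this pointer: it can call M1,
        // but it never sees the service object (and thus cannot reach M2).
        IFunctionLike m1Pointer = new IFunctionLike() {
            public Object executeCall(Object[] a) {
                return service.m1((String) a[0]);
            }
        };

        System.out.println(m1Pointer.executeCall(new Object[]{ "processA" }));
    }
}
```

JFK generates such adapters dynamically at run-time; this hand-written version only illustrates the modularity argument.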
"Ok, I can get this for free with java.lang.reflect.Method objects"
Again, this is not reflection! Reflection is slow. Reflection is limited. This is a direct method call thru a pointer object! And no, this is not a fancy use of proxies/interceptors! Note that with reflection you can invoke only local methods, while with JFK you are free to call even remote functions without having to deal with RMI objects and stubs. At the moment this feature is not implemented yet, but it is possible. Please keep in mind that, at the time of writing, JFK is still a proof of concept, so not all possible features have been implemented!
Keeping an eye on security, JFK does not allow a program to get a function pointer to any method available in a class/object: a method must be explicitly exported. That is, when you define a class and its methods, you have to explicitly mark the methods that can be pointed to. This allows you to define at a fine grain which services (as functions) each class must export. Exporting a method is really simple: you mark it with the @Function annotation, indicating the function name, an identifier used to refer to that specific method (it can be the same as the method name or something with a different meaning, like 'Service1'). Let's see an example:

public class DummyClass {

    public final String resultString = "Hello JFK!";

    public String aMethodThatReturnsAString(){
        return resultString;
    }

    @Function( name = "double" )
    public Double doubleValue( Double value ){
        return new Double( value.doubleValue() * 2 );
    }

    @Function( name = "string" )
    public String composeString( Integer value ){
        return resultString + value.intValue();
    }

    @Function( name = "string2" )
    public String composeStringInteger( String s, Integer value ){
        return s + value.toString();
    }
}

The above class exports three instance methods, each with a different identifier. For instance, the method composeStringInteger is exposed with the identifier 'string2'. This can be used from a program in the following way:

    // get a pointer to another function:
    // the "double" function returns a computation on a double
    // passed on the stack (dummy is a DummyClass instance; builder
    // is the JFK function builder, obtained earlier and not shown here)
    IFunction function = builder.bindFunction( dummy, "double" );
    dummy = null;            // note that the dummy object is no more used!!!
    Double d1 = new Double(10.5);
    Double d2 = (Double) function.executeCall( new Object[]{ d1 } );
    System.out.println("Computation of the double function returned " + d2);
    // it prints
    // Computation of the double function returned 21.0

As you can see, you obtain a function pointer to the object method exported as 'double', and then execute the function with IFunction.executeCall(..). That's so easy!
So, to recap: you get an IFunction object bound to a method identified by an exposing name, and you call the executeCall method on such IFunction to execute the pointed function. I stress it again: this is not reflection! Moreover, IFunction is an interface without a static implementation, which means there is nothing static here: all the code is dynamically generated at run-time.
Being dynamic does not mean that there are no checks and constraints: before invoking the function, the system checks the number of arguments, the argument types and so on, and throws appropriate exceptions (e.g., BadArityException).
Now, inspecting the stack trace of the IFunction.executeCall(..) you will never see a Method.invoke(..) or stuff like that (do you remember that this is not reflection?). 
Performance is really boosted with JFK when compared to reflection. For instance, the call of the 'double' function requires around 12850 nanoseconds with JFK, while it requires 324133 nanoseconds using reflection (in particular 292914 ns to find the method and 31219 ns to invoke it). So a simple method execution goes 25 times faster, and even more: since IFunction objects are cached, once they are bound to a function, the execution of the pointer is almost immediate!

Closures
Closures are pieces of anonymous code that can be executed as first-class entities. To put it simply, closures are like Java methods that can be defined on the fly and that do not belong to any specific class.
Java does not support closures, but something similar can be obtained with anonymous inner classes. Having function pointers, closures come almost for free, so in your code you can do something like the following:

    IClosureBuilder closureBuilder = JFK.getClosureBuilder();
    IFunction closure = closureBuilder.buildClosure(
        "public String concat(String s, Integer i){ return s + i.intValue(); }");
    // now use the closure; please note that there
    // is no object/class created here!
    String closureResult = (String) closure.executeCall(
        new Object[]{ "Hello JFK!", new Integer(1234) } );
    System.out.println("Closure result: " + closureResult);
    // it prints
    // Closure result: Hello JFK!1234

Closures are defined as IClosure objects, which are a special case of IFunction objects. While an IFunction object points to an existing method, an IClosure object points to a method that does not exist yet and is not exposed thru any class/object. Again, there is no reflection here, and there is no static implementation of IClosure available. Execution times are of the same order as IFunction ones, but closures are not cached in any way, since they are thought of as one-shot execution units. I haven't checked whether my approach is the same as that of Groovy closures, but I suspect there is something similar here.

Delegates were introduced by C# to provide an easy way to simulate function pointers for event handling. JFK provides a declarative way of defining delegates and their associations that is somewhat similar to the signal-slot mechanism of Qt. First of all, a little terminology:
  • a delegate is the implementation of a behaviour (this is similar to a slot in the Qt terminology)
  • a delegatable is an object and/or a method that can be bound to a delegate, that is, to a concrete implementation (this is similar to a signal in the Qt terminology)
So, for instance, with the well known example of the ActionEvent, it is possible to say that the method actionPerformed(..) is a delegatable, while the implementation of actionPerformed(..) is the delegate.
The idea that leads JFK delegates has been the following:
  1. delegatable methods could be abstract (the implementation does not matter when the delegate is declared)
  2. adding and removing a delegate instance should be dynamic and should not burden the delegatable instance
Let's see each point with an event based example; consider the following event generator class:

public abstract class EventGenerator implements IDelegatable{

    public void doEvent(){
        for( int i = 0; i < 10; i++ )
            this.notifyEvent( "Event " + i );
    }

    @Delegate( name="event", allowMultiple = true )
    public abstract void notifyEvent(String event);
}

The above EventGenerator class is a skeleton for an event provider, such as a button, a text field, or something else. The idea is that when the doEvent() method is executed, an event is dispatched thru the notifyEvent(..) method. As point (1) states, the notifyEvent(..) method can be abstract, as it is in this example: since notifyEvent(..) will have its implementation provided by someone else (the event consumer), its implementation does not matter here. Leaving the delegatable method abstract means that you cannot instantiate an EventGenerator object without having bound it to a method implementation. If you need to be able to instantiate it anyway, you can provide a method body (even an empty one), keeping in mind that it will be replaced by a connection to the delegate that must execute the method. The delegatable method must be annotated with the @Delegate annotation, where you can specify a name (similar in aim to a function name) and a flag that states whether the delegatable can be connected to multiple delegates. And here comes point (2): note how the delegatable method is called only once. Even when multiple delegates are connected to the delegatable, the JFK kernel takes care of executing all the delegates. In standard Java you would have to write:

public void doEvent(){
    for( int i = 0; i < 10; i++ )
        for( MyEventListener l : this.listeners )
             l.notifyEvent( "Event " + i );
}

where listeners is a list of listeners and MyEventListener is the interface associated to the listener. Can you see the extra loop to notify all the listeners? It means that the producer has to keep track of all the consumers, and this is wrong! It tightly couples the producer to all its consumers, and this is an awkward implementation. In fact, I think that the event mechanism as implemented in standard Java is not a decoupling mechanism, and this is why I tend to prefer AspectJ/AOP event notifications (as implemented in WhiteCat). Again, with JFK there is no reflection here, and the kernel is not keeping track of all the consumers; rather, it manages a set of function pointers to consumers. It's that easy!
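To make the coupling concrete, here is the plain-Java version sketched above fleshed out into a runnable example (MyEventListener, the listener list, and addListener are hypothetical names filled in for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class PlainJavaEvents {

    // the listener interface the producer must know about
    interface MyEventListener {
        void notifyEvent(String event);
    }

    static class EventGenerator {
        // the producer itself must keep track of every consumer
        private final List<MyEventListener> listeners =
                new ArrayList<MyEventListener>();

        void addListener(MyEventListener l){ listeners.add(l); }

        void doEvent(){
            for( int i = 0; i < 10; i++ )
                // the extra loop: notify each registered listener by hand
                for( MyEventListener l : listeners )
                    l.notifyEvent( "Event " + i );
        }
    }

    public static void main(String[] args){
        EventGenerator generator = new EventGenerator();
        generator.addListener(new MyEventListener(){
            public void notifyEvent(String event){
                System.out.println("Consuming " + event);
            }
        });
        generator.doEvent();  // prints Consuming Event 0 ... Consuming Event 9
    }
}
```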
Now let's see how you can use delegates; first you have to implement a behaviour for the delegate. Assume we have the following two:

public class EventConsumer implements IDelegate{

    @Connect( name="event" )
    public void consumeEvent(String event){
        System.out.println("\n\t********** Cosuming event "
                        + event + "\n\n\n");
    }
}

public class EventConsumer2 implements IDelegate {

    @Connect( name="event" )
    public void consumeEvent2( String e ){
        System.out.println("\n\t**********>>>>>>> Cosuming event " + e + "\n\n\n");
    }
}

Both event consumers have a method with the same signature as the delegatable one; before binding methods, the JFK kernel checks that the signatures are compatible. Both methods are annotated with the @Connect annotation, which specifies the name of the delegatable to connect to. Now you can write a program that does the following:

    IDelegateManager manager = JFK.getDelegateManager();
    IDelegatable consumer = (IDelegatable) manager.createAndBind(
                 EventGenerator.class, new EventConsumer() );
    // now the delegate will invoke the abstract method,
    // which has been defined at run-time to match the
    // consumer method in the EventConsumer object
    ((EventGenerator) consumer).doEvent();
    // it prints
    // ********** Cosuming event Event 0
    // ....
    // ********** Cosuming event Event 9
    // now it is possible to add another consumer to the
    // event generator, since it allows multiple connections;
    // to do this, we can add another delegate to the instance
    manager.addDelegate( consumer, new EventConsumer2() );
    // now both delegates will be executed each time the
    // abstract method is called
    ((EventGenerator) consumer).doEvent();
    // it prints
    // ********** Cosuming event Event 0            <- from EventConsumer
    // **********>>>>>>> Cosuming event Event 0     <- from EventConsumer2
    // ....
    // ********** Cosuming event Event 9            <- from EventConsumer
    // **********>>>>>>> Cosuming event Event 9     <- from EventConsumer2

First of all you need a delegate manager, which you ask to instantiate an EventGenerator object (you cannot instantiate it directly in this example because it has abstract methods), binding it to an EventConsumer instance. This produces a new instance of EventGenerator that will call and execute EventConsumer.consumeEvent(..) each time EventGenerator.notifyEvent(..) is called. Afterwards, you can dynamically add (and remove) other delegates to the running instance of EventGenerator, so that all the associated delegates will be executed when the EventGenerator.notifyEvent(..) method is called. I stress it again: the event generator does not deal with all the possible event consumers, the JFK kernel does it! And again, there is no reflection involved here; everything happens as a direct method call!
The following picture illustrates how you can imagine the delegate works in JFK:
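One naive way to emulate this multi-delegate dispatch in plain Java is a dynamic proxy, shown below. This is not how JFK works (JFK binds an abstract class and avoids reflection, while java.lang.reflect.Proxy can only implement interfaces and reflects on every call); the interface and names here are invented for the sketch:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class DelegateSketch {

    // the delegatable contract, as an interface (a Proxy limitation)
    interface IEventGenerator {
        void notifyEvent(String event);
    }

    // the "manager": keeps the delegate list and fans out each call
    static IEventGenerator createAndBind(final List<IEventGenerator> delegates){
        return (IEventGenerator) Proxy.newProxyInstance(
            DelegateSketch.class.getClassLoader(),
            new Class<?>[]{ IEventGenerator.class },
            new InvocationHandler(){
                public Object invoke(Object proxy, Method m, Object[] args){
                    // a single call on the proxy reaches every delegate
                    for( IEventGenerator d : delegates )
                        d.notifyEvent( (String) args[0] );
                    return null;
                }
            });
    }

    public static void main(String[] args){
        List<IEventGenerator> delegates = new ArrayList<IEventGenerator>();
        delegates.add(new IEventGenerator(){
            public void notifyEvent(String e){ System.out.println("********** " + e); }
        });
        delegates.add(new IEventGenerator(){
            public void notifyEvent(String e){ System.out.println("**********>>>>>>> " + e); }
        });

        IEventGenerator generator = createAndBind(delegates);
        generator.notifyEvent("Event 0");
        // prints:
        // ********** Event 0
        // **********>>>>>>> Event 0
    }
}
```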

What else can I do with JFK?
Well, JFK is a functional kernel, so you can do whatever you would do with functions. For instance, you can pass a function pointer to another method/function, enabling functional programming!
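In pre-lambda Java the same idea can only be approximated by passing around objects that wrap a single method; the interface and method names below are invented for illustration:

```java
public class HigherOrder {

    // a hypothetical one-argument function type
    interface IntFunction {
        int apply(int x);
    }

    // a method that receives "a function" and applies it twice
    static int applyTwice(IntFunction f, int x){
        return f.apply(f.apply(x));
    }

    public static void main(String[] args){
        // the "function pointer" being passed around
        IntFunction doubler = new IntFunction(){
            public int apply(int x){ return x * 2; }
        };
        System.out.println( applyTwice(doubler, 5) );  // prints 20
    }
}
```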

A final note on reflection
In this article I wrote several times that JFK does not use reflection. This is not entirely true. As you probably noted, the current implementation is based on annotations, and this means that in order to read annotations and their values, JFK needs reflection. The thing that must be clear is that method execution thru function pointers does not use reflection at all!
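That one-time reflective step can be sketched as follows; the annotation here mimics JFK's @Delegate but is declared locally, so the example is standalone:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class AnnotationLookup {

    // a stand-in for JFK's @Delegate annotation
    @Retention(RetentionPolicy.RUNTIME)
    @interface Delegate {
        String name();
        boolean allowMultiple() default false;
    }

    static abstract class EventGenerator {
        @Delegate( name = "event", allowMultiple = true )
        public abstract void notifyEvent(String event);
    }

    public static void main(String[] args) throws Exception {
        // the one-time reflective lookup performed at binding time:
        // find the annotated method and read its metadata
        Method m = EventGenerator.class.getMethod("notifyEvent", String.class);
        Delegate d = m.getAnnotation(Delegate.class);
        System.out.println("delegate name = " + d.name()
                + ", multiple = " + d.allowMultiple());
        // prints: delegate name = event, multiple = true
    }
}
```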

(Some) Implementation Details
I'm not going to show all the internals; for now it suffices to know that this project is developed in standard Java (J2SE 6) and, strangely, it is still not an AspectJ project, as almost every project I run is. All the configuration of the run-time system is done using Spring, and there is a test suite (JUnit 4) that stresses the system and its functionality.

Need more info?
Well, at the moment this is a private research project of mine, so I cannot show you all the details because they are still changing (but the API is stable). I've created a page on the Open Source University Meetup where discussions can happen, besides my blog. If you need more info, or want to collaborate on the project, feel free to contact me.

lunedì 2 agosto 2010

WhiteCat now has a page on OSUM

I'm not sure it makes sense to have a page on OSUM, since the situation of what used to be Sun's is not clear to me, but I've created a page to make the WhiteCat project more visible. You can find the page here.
However, please take into account that the main information about the project will be available, of course, thru my blog.
I've used the same logo as the one adopted in the CTS 2010 poster.