Tuesday, October 27, 2015

The right way to exit Vi(m)

There is a promotional Vim T-shirt that shows a wrong command, one that I also find quite often in manuals dedicated to the editor.


The "safe" way to exit the editor is often given as the command ':wq!', which means forced (!) write (w) and quit (q); in other words, save the current file and exit.
What is the problem? A forced write updates the file's metadata even when the file has not been modified, confusing programs that digest those metadata (e.g. make). The correct way to exit the editor is 'ZZ', which takes care of saving modified, unsaved buffers while leaving the others untouched.
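The effect is easy to reproduce outside the editor. The following is a minimal sketch (plain POSIX shell, no Vim involved; the file names are made up) of the timestamp comparison that tools like make rely on:

```shell
#!/bin/sh
# make rebuilds a target when a prerequisite has a newer mtime.
echo 'hello' > source.txt
cp -p source.txt target.txt   # -p preserves the timestamp: target is "up to date"
sleep 1
touch source.txt              # what ':wq!' effectively does to an unchanged file
if [ source.txt -nt target.txt ]; then
    echo 'source is newer: make would rebuild the target'
fi
```

'ZZ' (or its ex equivalent ':x') on an untouched buffer skips the write entirely, so the timestamp, and everything downstream of it, stays put.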

Monday, October 26, 2015

ITPUGLab @ PGDay.IT 2015

I had the opportunity and pleasure to play an active role in the third ITPUGLab, a well established tradition and a successful event that my friend Gianluca and I proposed a few years ago.
And I have to say: it was really fun and educational.

What is the ITPUGLab? In short: it is an Open Space container entirely focused on PostgreSQL.
Attendees meet to exchange, propose or request ideas, thoughts, approaches and experiences, getting hands-on in a LAN environment and building a constructive shared experience on their laptops; even philosophical discussions of any kind are welcome, as long as they are user-experience centric and related to PostgreSQL. No matter what the participants' skill level is.
There are no predefined contents: attendees come and propose, or join others' proposals.
The evolution of the shared, interactive contributions is what leads to discovering a path (not necessarily the right one) and reaching a possible goal.
This translates to human networking with a PostgreSQL-social approach, allowing attendees to get acquainted in ways one cannot predict.

This year we had two and half hours dedicated to the lab, a very comfortable room and very nice people attending.

The following is the list of topics discussed and experienced:
  • installation on Microsoft Windows, where the users tackled the differences of installing PostgreSQL on a non-Unix machine, reaching the goal of providing a running instance to the other people in the room;
  • migration and upgrade, with particular interest in the migration of a quite old cluster from an MS Windows machine to a mature and up-to-date cluster on a *nix machine, as well as how to do it automatically and safely;
  • install, configure and use the PostGIS extension from scratch;
  • pl/pgsql scripting, with particular focus on editors, repos and best practices;
  • data integrity check and validation with regard to the database and/or application;
  • periodical data dump and load from one server to one (or many) others, with regard to various scenarios and possible automations.

The rules of the ITPUGLab are simple: after introducing themselves, participants start grouping spontaneously, warm up and get discussing, hands-on. Everybody can join an already formed Open Space, as well as leave it or even the room. When it's over, it's over: once the time has elapsed, pencils are down, and whatever happened is the only thing that could have happened.
Pictures cannot provide the excitement and fun filling the room.






As said, this was the third edition of the ITPUGLab, and quite frankly I'm proud of the continuous success it is getting within the PGDay.IT annual conference.
One thing all three editions have had in common is the attendees' request for more time: we are evaluating how to extend the session at the next PGDay.IT.
If you are coming to the next PGDay.IT, get into the lab: it's an experience you really don't want to miss!

Saturday, October 24, 2015

PGDay.IT 2015: nine editions and counting!


We made it!
ITPUG (Italian PostgreSQL Users' Group) organized the ninth edition of the Italian PGDay, namely PGDay.IT.
We have a very strong and quite long tradition in organizing this national conference and, as in previous editions, we had a successful conference this year too.
The location, the Camera di Commercio di Prato, was simply great: a modern and really beautiful context to host the two tracks and the third edition of the ITPUGLab, the Open Space container entirely dedicated to PostgreSQL.
The keynote speech was delivered by the well known community member Andres Freund, but he was not the only member of the international community.
After the keynote and the usual coffee break, with many delicious Italian pastries, the conference split into two parallel tracks where a set of very competent and efficient speakers presented new projects, ideas, features and core implementations of our favorite database.
In the afternoon a third track was added to the two already mentioned, giving attendees the possibility to participate in the ITPUGLab, another well established tradition of the PGDay.IT.
Last but not least, the usual session of lightning talks, the group picture and the very good beer offered by one of the conference sponsors.

At the end we can count one hundred attendees, ten regular talks, sixteen speakers, six sponsors and two social beers.

It is quite difficult to recap in a few lines what this event was and has been in past editions. I can only say that if you are missing this conference, you are missing a very technical event in a friendly and fun context.

Thursday, October 8, 2015

PGDay.IT 2015: we are here!

The ninth edition of the Italian PGDay (PGDay.IT 2015) is really close, and ITPUG is proud to announce that the schedule is available on-line.
As in the previous editions we have a rich set of talks and contributions, as well as the third edition of the ITPUG's own Open Space named ITPUG-Lab.

Check out the official website at http://2015.pgday.it and see you soon at PGDay.IT 2015!

Monday, September 28, 2015

PGDay.IT 2015: accepted talks online

PGDay.IT 2015 is now very close, and the list of accepted talks is available on the event's official website.
The keynote of this edition is entrusted to Andres Freund, a rather well known name in the international community thanks to his work on the replication system.

I will have the honor and the pleasure, for the second time (the third in ITPUG's history), of taking part in the ITPUG-Lab, an Open Space dedicated to PostgreSQL, a great aggregation point and a source of great success in the previous editions.

Saturday, September 12, 2015

Ubuntu do-release-upgrade: worst update manager ever (?)

Ok, I'm not the right person to judge the good quality efforts behind the Ubuntu/Kubuntu update manager, but I have to say that I have never been able to do a clean upgrade without problems. The last time I upgraded my whole system I spent almost a week trying to get 3D video card acceleration back, as well as re-tuning my Perl libraries and RT.
It does not sound like a well-automated upgrade... at least in my experience.

Selling on the web... and meeting arrogant people!

When I put something up for sale on the Web I always try to be very clear: I attach several photos of the item and describe its size, weight, state of preservation, the reason for the sale, and whether I am willing to ship it.
I figure that if I were a potential buyer, before wasting my time I would want as much information as possible, so I think my listings are always fairly complete.
Despite this, and despite the price I ask, I always end up dealing with presumptuous people who think I want to open a negotiation at all costs. Wrong! I don't post listings to empty my garage; I post listings for stuff I no longer use but that I can perfectly well keep where it is!

A practical example: an item listed at 70 euros, weighing about 14 kg; here is what I was offered:


I'm contacting you about the listing [..omissis..].
50 € including shipping?


Go to hell!
Did I write anywhere that I am willing to negotiate? NO!
Did I write anywhere that I am willing to ship it? NO!
Did I write anywhere that I have to get rid of it at all costs because I am emigrating to another country? NO!
Even assuming I could accept the lower price, do the math on the shipping cost and you will see you are asking me to give it to you for free!

But I am still a reasonable person, so I decided to give this sloppy fellow a chance:


Hi,
sorry, but you too can understand that shipping such an item at the price
you suggest is not a great deal, from my point of view. Now, can you
justify your offer? Because if I find the justification valid, I will
ship it to you.


Of course no justification ever arrived, nor any higher offer (if it is a negotiation, both sides should be flexible, shouldn't they?).

Dear potential buyers, I would never allow myself to undervalue your items, and if I wanted to buy them I would first ask whether the price is negotiable. I am not at the market and, luckily, I am not desperate, so don't bust my balls and don't disrespect someone who tries to give you as many elements as possible for a fair evaluation of the goods.


Saturday, August 15, 2015

PGDay.IT 2015 - here we come!

Once again we are organizing the Italian national day dedicated to PostgreSQL: PGDay.IT 2015.



I am very happy to be part of the organization, partly because I had to miss the last edition due to personal problems.
This edition brings many novelties, starting with the venue, which moves (slightly) to a new, very modern location that looks very promising: the Camera di Commercio di Prato.
In the meantime we have also published the banners to get some publicity, we already have two very respectable sponsors, and several talk proposals have been submitted.
All that remains is to stay tuned and attend the event!

Friday, May 29, 2015

Me, myself and Dataflex

I hate Dataflex (http://en.wikipedia.org/wiki/DataFlex) with a passion!
Well, when I say that I hate Dataflex, I should say I hate the so-called "console mode" Dataflex, the only one I've ever worked on. And I also have to say that part of my hate is due to wrong training or, better, no training at all.

How did I ever meet Dataflex (df31d)? Well, you know, the rent is a good motivation to work with tools you don't like very much.
From the beginning the language itself seemed awkward to me. Coming from some real languages (C, Perl, Java), I did not feel at home with a case-insensitive language.
The total lack of braces and the return to the BEGIN-END syntax was quite a shock.
No multi-line comments.
A compiler that crashed every time a line was longer than 256 characters... and no, I'm not joking! I don't remember how many hours I spent trying to understand why a program would not compile at all, without any error message, only to discover that somewhere I had a quite complex (indented) IF clause that had exceeded the right margin. And I have to say I often laugh thinking about this stupid bug, probably implemented in a way I have only seen in didactic C examples such as:

#define MAX_LINE_SIZE 256
...
char current_line[ MAX_LINE_SIZE ];

However, I started writing my programs, and as usual with a new language, my first developments were baby steps into the Dataflex world. My programs were simple, written in a simple and well documented way, so that they looked even more stupid to me.

There was no IDE for developing in Dataflex, so I fired up Emacs for the task. But it was not a simple task, since Dataflex displayed masks on the screen using the DOS character set, which at that time did not ship with Emacs. I therefore had to compile the appropriate encoding, add it to Emacs, and configure the editor to load that encoding for every Dataflex file (.frm). At that time I was an Emacs novice, so it was quite a hard job for me.

As I said, I was coming from some experience with real languages, and even if you can get past the syntax and the buggy compiler, you cannot live without methods. Well, my Dataflex was without methods. I had two choices: define a "routine" invoked via a far jump (GOSUB), or use labels to far-jump to other pieces of the program (GOTO).


The operator set was... tiny. Moreover, many operators were verbatim, so comparisons used GT, GE, LE, LT and so on.
Assignment was performed via a MOVE...TO command. If my memory serves me well, the only "smart" arithmetic operators were INCREMENT and DECREMENT.

To complete the nightmare, I did not have any kind of good documentation (and I was not able to find any on the Web).

But you had the opportunity to define "macros", in the C language sense. Ok, this sounds good, until you clash with some variable or loop name.

Last but not least, the compiler reported errors at lines with macros already expanded. In other words, while the compiler was reporting an error on line X, your actual error could be much earlier, due to some macro expansion.

After a while working with all this mess, I found the special DEBUG command. Its purpose was to print on the screen which line the program was executing. But it was not very helpful, since it just printed a number (like 123) exactly where the cursor was, filling the application with digits and making me feel like I was looking at the Matrix screen.

Next I discovered the compiler's -v switch, and found it could be increased to at least -vvv for more verbose messages. Or better: the messages were as obscure as usual, but the processed file (with macros expanded) was printed on the screen, so you could find the offending line number with more accuracy.

Then came methods. Yeah! A great day, one that made me feel a bit more at home.
Well, methods in Dataflex are not what you would expect from other languages. The prototype is extremely verbose, and the invocation reminds me of Lisp, since you have to put the method name (as well as the arguments) in parentheses:

(foo(1, 2, 3))

But hey, at least you have some real reusable code without the name clashing of a macro and with a return value!

The special key handling was a pure mess! Dataflex used subroutines to handle events generated by special keys, with some confusion about what, when and how to resume the control flow.

The database structure was...not a database structure, at least in my opinion.
If my memory serves me well, you had to define a new archive (a kind of ISAM) using a specific program, which ensured that the data file and the indexes (also separate files) were in place. The fresh archive was then added to the so-called filelist, which was in charge of listing all available databases (a kind of schema for an RDBMS). Modifying an archive (e.g., adding a field) was of course a locking operation, so you had to schedule maintenance. And the filelist being limited in size, you had a limited number of tables/archives in your deployment.

One way to overcome the limit on the number of archives was to play with user paths: as with the concept of search schemas in an RDBMS, a user could have several copies of the same archive, with the same binary structure and different content, in different disk locations. Pointing to one or another would do the trick.
We used this in particular to scatter a few utility archives among users, so that everyone could have their own copy.

Relying on the file system, protection against data corruption was left to the operating system and its file system. Using Linux we luckily did not see many corruptions but, stuff happens, so you had to run a specific tool to reindex the whole archive. Of course, this was another fully locking operation.

In general, the speed of data retrieval was good, but the approach was that of the single record (as opposed to whole sets), and therefore all programs contained long, nested loops to extract the information you needed. The relational part of the query (e.g., joins) was entirely up to the developer, so missing a single attribute could destroy all your retrieval logic in a subtle way.

Ok, so there were BEGIN-END blocks, loops, GOTO/GOSUB and locking operations... but the system worked. And it worked up to a few gigabytes of data, so I have to say I was quite impressed by it.

Of course, you did not have the flexibility of SQL, and you did not even have a way to specify a query that was not "pre-built". Allow me to elaborate a bit more: as I said, you had to define indexes for every archive. An index defines how the archive can be read, that is, in which order you can loop through the records. What if you want to retrieve the records in another order? You either have to define a new index (but you are limited by the number of indexes and by locking operations) or use the existing ones in an esoteric way, making your loops even more complex and your program a lot less readable.

Last but not least: an index was not only an access method, but also the way to define a unique constraint. Therefore you were limited to only a few indexes per archive, and the rest was a huge REPL.
There was also the catch-all looping mechanism: the sequential scan of the whole archive (also known as BY RECNUM).

Adding records was quite simple: a special instruction, SAVERECORD, was there for the purpose.
Modifying a record was a little more complex, since you had to lock via the REREAD command, modify the fields, and then issue a SAVERECORD followed by an UNLOCK.

Perl came to the rescue.
At that time I was mainly using shell scripting, but here I needed something a little more complex to handle all the mess left around by Dataflex. For instance, I used a Perl script to convert and mangle output text before sending it to the printer: Dataflex was absolutely no good at handling text!
I also used Perl to control how users jumped into the Dataflex runtime, which allowed us to ease session management when locking operations were absolutely necessary.
Finally, I used Perl to mangle some Dataflex source code in order to avoid some boring work.

I have to say that, in order to automate some looping, Dataflex provided WINDOWINDEX and FIELDINDEX, two special (global) variables to iterate over UI and database fields. Note that these variables were global, so a wrong initialization could send you to the wrong record or field!

Now, after all this ranting, I have to say that I'm aware of many good uses of Dataflex, which also has a kind of OOP interface. As I said, I had to work on a lot of legacy code, and without documentation and appropriate training it was quite impossible for me to use the "advanced" features.
As a final word, note that Dataflex was a quite old language, so it is obvious that, compared to modern languages, it looks scary and awkward.

But sometimes I still have nightmares about Dataflex!

Tuesday, May 26, 2015

As trivial as getting a passport photo taken

Getting a passport photo taken is a fairly common, trivial operation, almost devoid of meaning.
Except when it is your first time, as it was for me today.
Not really the first time I had a passport photo taken, but the first time a stranger took it.
The first time I noticed the professionalism I had unconsciously been used to: the backdrops always tidy, the flashes always in place, the camera always ready and in position, and the monitor to check the result on. Even the photo holder with logo and name.
Trivial things, almost insignificant details, that increase even more the sense of detachment from the person who used to take that photo.
Yes, for the first time since I was born, I have changed photographer.
And beyond the professional aspects, I can only say that the previous one was by far the best I have ever seen at work.
Trivial, but sincere.

Sunday, May 24, 2015

FreeBSD and lagg at boot

I was experimenting with my FreeBSD and lagg(4), the link aggregator.
The idea is to team network interfaces, using more than one at the same time with a specific protocol, for instance to provide high availability and redundancy.
So I fired up a VirtualBox instance with as many NICs as I could and did some aggregation. Nothing special so far.
When I was satisfied, I tried to put the same configuration in /etc/rc.conf to enable it at boot, and here came the problem: I was unable to create the lagg interface and, consequently, to aggregate the NICs.
It took me a while, but I finally got the hint from the rc.conf(5) man page: cloned_interfaces! This variable contains a list of clonable interfaces, which are therefore created at boot.
Since it was not so simple to find out how to set up rc.conf, the following is mine, in the hope it can help someone.


############################
## Link Aggregation Setup ##
############################
#
# First of all bring up the interface,
# then create the lagg0 interface using the
# cloned_interfaces (which in rc.conf means
# to create the interface) and then
# with a single command bring up the lagg0, specify the
# protocol to use and add each interface (laggport).
#
#
ifconfig_em1="up"
ifconfig_em2="up"
ifconfig_em3="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="up laggproto roundrobin laggport em1 laggport em2 laggport em3 192.168.200.222 netmask 255.255.255.0"
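For one-off experimenting before touching /etc/rc.conf, the same aggregation can be built by hand with ifconfig (a sketch assuming the interface names above; run as root on FreeBSD, and note the settings are lost on reboot):

```shell
# Bring the member NICs up first
ifconfig em1 up
ifconfig em2 up
ifconfig em3 up
# Clone the lagg device (this is the step rc.conf performs via cloned_interfaces)
ifconfig lagg0 create
# Configure protocol, members and address in one go
ifconfig lagg0 up laggproto roundrobin \
    laggport em1 laggport em2 laggport em3 \
    192.168.200.222 netmask 255.255.255.0
# Verify protocol and port status
ifconfig lagg0
```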

Saturday, May 23, 2015

CPAN PR Challenge: May done!

In order to recover from the April failure, I decided to do my May assignment as quickly as possible.
And just a few days after being assigned Mouse, I placed a pull request.
As you can see, I'm still doing mostly clean-up refactoring, which allows me to improve my Perl skills and learn about the module I'm working on. By the way, I also tried to fix an official issue.

Qt turns 20

The world is celebrating 20 years of Qt.
How time flies.
For those who don't know it, Qt is a graphical framework that has grown considerably over time. It is written in C++, originally by TrollTech, a Norwegian company. Perhaps the widest and best known use of Qt is the KDE graphical desktop.

Personally, I ran into Qt by chance, in 1999. Or rather, I already knew about Qt, but I had never looked under the hood. In 1999 I found an article in Linux Magazine explaining how to create a small graphical application with the KDE libraries, which indeed closely resemble the Qt ones they are built upon.
From there I started exploring the Qt universe, even ordering the Tucano Book (http://shop.oreilly.com/product/9781565925885.do). Alas, despite my constant interest in Qt, I never had the opportunity to use it in a production project, only in small personal ones.

Qt taught me many things.
First of all, the documentation is excellent (especially when compared to that of the KDELibs, which in my opinion is lacking).
The API is extraordinarily simple and coherent, something I have not found in any other comparable framework.
The database persistence handling is simply disarming: no other framework/ORM achieves the same results in terms of code.
The resulting code is always compact, clear and concise.
The signal/slot concept is apparently more complex than the event/handler model of other frameworks, but in the end it turns out to be far more elegant.

Qt is, for me, definitely the example of how a library should be.

Friday, May 22, 2015

Improvements to a little Perl script to find duplicated files

I made a few changes to my little Perl script that finds duplicated files in a set of directories.
I had always wanted to use File::Find::Rule, and this was an appropriate context to plug the module in, so the code now looks as follows:

use strict;
use warnings;
use feature 'say';
use File::Find::Rule;
use Digest::file qw( digest_file );

die "\nPlease specify one or more directories\n" unless @ARGV;

# map each SHA-1 digest to the list of paths having that content
my $files = {};
File::Find::Rule->file()
    ->nonempty()
    ->exec( sub { push @{ $files->{ digest_file( $_[2], 'SHA-1' ) } }, $_[2]; } )
    ->in( @ARGV );

while ( my ( $sha1, $paths ) = each %$files ) {
    # a digest with more than one path means duplicated content
    say "\n\n#Duplicated files: \n\t#rm " . join( "\n\t#rm ", @$paths )
        if @$paths > 1;
}

As you can see, I've also removed a few checks on the arguments, since I rely on the in() method to verify that every entry in @ARGV is a directory.

CPAN PR Challenge: April skipped!

I totally missed the April assignment.
I was assigned Net::Pcap, which was definitely too complex for my skills.
Besides, I did not even have the time to study it in depth.
Sorry guys.

Monday, April 6, 2015

CPAN Pull Request Challenge: March done!

Even if my March PR was not very complicated, it was successfully merged in time for my issue.
I have to admit that, due to personal duties, I was not able to contribute very much to the last assignment. So I applied the Perl way of doing things to the existing code, removing a few unnecessary statements and conditions from the module.
Enough to keep contributing to very good code and stay tuned in to the Perl world.

Welcome, new Italian Planet PostgreSQL!

As you have probably already noticed, the Italian PostgreSQL planet has changed its look.
It is not just a cosmetic revision, but a necessary migration of the whole infrastructure. The occasion was also taken to change the implementation, moving from planet-planet to rawdog.
Thanks to Gianluca for his commitment and professionalism.
And happy writing!

Sunday, March 22, 2015

Lego

Sometimes it is nice to dig up the old advertisements for the toys that made us grow up. And that keep making our children grow (sure, now they are even robotized and there are dedicated books... but the concept does not change!)


Saturday, March 21, 2015

Wednesday, March 18, 2015

Kubuntu upgrade...

When I plan a system upgrade with Kubuntu I already know from the start that I have to expect some trouble.
I don't know whether it is just my problem; let's say I have never spent much time understanding the Debian upgrade system well. The fact is that with Kubuntu I have always had problems.
LTS distribution or not, at the end of the upgrade there are quite a few things that no longer work.
Often it is the desktop that no longer starts.
Sometimes the printer does not work.

But the last upgrade was worse.
Apparently Eclipse no longer offered autocompletion with CTRL-space, but at first I did not pay attention to it.
Much worse: Perl no longer worked.
What I discovered is that Perl had been upgraded from version 5.14.2 to 5.18.2; too bad all the libraries installed via CPAN were in a version-specific directory, /usr/local/share/perl/5.14.2, while Perl now had @INC pointing to the corresponding 5.18.2 directory, which was empty!
So I "swapped" the two directories, which seemed to me the quickest solution, and launched a full upgrade from the CPAN shell.
While waiting, I discovered that the CTRL-space combination did not work in Emacs either. Horror!
This time the culprit was the keyboard layout switcher, which used exactly that combination to change the input layout and language. Disabling this (to me) useless feature brought the editors back to working correctly.

Making coffee with Emacs

In the eternal fight among text editors (or multi-purpose environments), Emacs wins hands down because... it can even make coffee.
Of course you cannot drive just any machine: the coffee maker must be connected to the network and support the HTCPCP (Hyper Text Coffee Pot Control Protocol).

members @ ITPUG

Today a new project, or rather a new application, was published on the ITPUG website: members.
The application, developed by Gianluca (one of the board members), allows online membership registration and, on the back-office side, the management of the members database itself (the members register).
The application is reachable from the official ITPUG website through the online registration links.

I consider the project very positive, first of all because it will allow (and simplify) managing the members in a much easier way than the current state, and also because Gianluca decided to involve the members with suggestions and remarks (as well as for the graphical part).

The application is, as is reasonable for a first release, still under development and improvement, and I am sure that changes will already be visible in the coming days.

On the technical side, needless to say, it uses a PostgreSQL database...

Tuesday, March 17, 2015

FreeBSD VIMAGE

I had never noticed VIMAGE, the FreeBSD kernel virtualization project, until I read that PCBSD was making changes in order to use VIMAGE in jails.
The idea behind VIMAGE, also called VNET, is to provide a self-contained per-jail state in order to virtualize modules and their behavior. This translates to the opportunity to virtualize network stacks on a per-jail basis, something that reminds me of the OpenSolaris Crossbow project.

Respect the developers' time and will!

I believe there is a strong underestimation of Open Source work today: people complain more and more every day without taking into account that whoever does Open Source is donating her time and effort to the whole community.
Having stated that, it is really simple, in my very own opinion: developers have the right to work on what makes them happy, not on what will make the users happy.
Most of the time, the two sides of happiness coincide.
In this scenario, before expressing rage against a group of developers without providing help, it is better to remember that you, the final user, did not hire the developers, and therefore they are free to ignore you.
For a practical example, see here, and if you have a little respect for yourself, never reply like this!

Monday, March 16, 2015

About programming competitions and selections

I found this very interesting post about online competitions for computer programmers.
Taking part in the 2015 CPAN Pull Request Challenge, and agreeing totally with the post's author about how useless these competitions can be, I want to reinforce the concept.
Nowadays software systems are very complex beasts, and it is much more important to look at how to solve a problem in a way that is maintainable, well documented, self-explanatory, and so on, rather than having a bunch of code that performs the right computation with a strange and little-known algorithm. Moreover, with the ubiquity of algorithm libraries, I don't see the point in proving your knowledge of algorithms anymore.

I remember at least three job interviews, a few years ago, where I was asked to solve a problem by implementing a mathematical algorithm. And I failed.
But I also remember at least two of the interviewers telling me they did not know Git and FreeBSD, or not knowing the difference between log-shipping replication and streaming replication.
I tend to prefer knowing a little about a lot of things, so as to be able to choose the right tool at the right moment (and improve my skills on demand), rather than knowing a single tool/paradigm very well.

So I don't see the point of an online competition on algorithms, or even of asking about algorithms anymore (unless your business is based on them). Rather, I strongly believe that being able to prove you have collaborated in FLOSS projects and have pushed changes and commits to real code makes your résumé stronger.

My private library (2)

A long time ago I posted a photo of my private computer science library.
These days I'm "refactoring" an old bookcase I got from my grandparents, and consequently I'm re-organizing my library.
I have to say that today I no longer use printed books, due to some reading problems I have, and therefore my library is pretty much only on electronic devices nowadays.
But arranging the printed books on the shelf made me feel better than clicking a Kindle button...


Sunday, March 15, 2015

PostgreSQL & friends

Here is an interesting (and fairly complete) graphical representation of the PostgreSQL timeline:



It is interesting to note how many companies and products are based (or were based) on forks of this project.

Monday, March 2, 2015

PGDay.IT 2014: some photos

Thanks to the joint efforts of Gianluca (who laboriously recovered the long-lost Flickr credentials) and Carlo (who produced and published the material), some photos of the main ITPUG event of 2014 are now available.

This too is an example of how, with a bit of healthy collaboration, it is possible to keep the association and its social network up to date.

Wednesday, February 25, 2015

ITPUG & PostgreSQL at the GrappaLUG

I'm glad to report that one of the ITPUG members, Denis, will give a free course (reserved to GrappaLUG members) on PostgreSQL.
The course consists of 5 evening lessons of 90 minutes each, meant to take the audience from the installation to the first interactions with the cluster, up to advanced SQL usage (stored procedures, CTEs, etc.), concluding with logical and physical backups and, why not, a bit of replication and Point In Time Recovery.

Tuesday, February 24, 2015

2015 CPAN Pull Request: February pending

My February assignment was not a piece of cake: I got MyCPAN::Indexer, a module by the great brian d foy, yes, the author of so many Perl books, the founder of the Perl Mongers, and... you know, pretty much a lot of the Perl world.
OK, what chance could I have to comment on and improve brian's code?
It does not matter, I did my homework at my very best.
The first step was to understand what the module was doing, and I have to say that the documentation did not help me a lot. Then I had to try the module by myself in order to see when and how to make it work. And it took me a while to understand what I should do.
Please note that the module is what brian calls a modulino, that is, a module that can also be invoked as a standalone application.

The code layout is... well, let me call it strange. It is something I would not use, and something my Emacs refuses to handle well, so this made changes a little more difficult. However, I limited myself to changing the documentation and fixing a few (optional) dependencies, as well as squashing a branch. Nothing really interesting, some monkey patching, but hey, it was a little too hard for me to do more in a single month!
But I'm really happy, even if at the time of writing my patch has not been merged yet, and even if it never will be. It has been a great opportunity to be forced to learn from a real Perl guru!

Thank you ITPUG

2014 was a very bad year, one I will remember forever for the things and the people I missed.
It was also the first year I missed the PGDay.IT; today, however, thanks to the board of directors and the volunteers, I received the shirts of the event.
This is a great thing for me, as it makes me feel part of this great community.





A special thanks also to OpenERP Italia!

Monday, February 23, 2015

PostgreSQL & DTrace @ FreeBSD

A while ago I struggled, or rather butted heads, with compiling DTrace and PostgreSQL (or better, compiling PostgreSQL with DTrace support) on FreeBSD. The last time, after reporting the problems on the pg-hackers list, I had also opened a bug on the FreeBSD tracker.
As of today the problem does not seem to be solved yet; maybe using DTrace + PostgreSQL + FreeBSD is not that interesting after all?

There seems to be a solution worth trying, though, related to dynamically loading the DTrace modules into the system before starting the build.
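If I understand it correctly, the workaround amounts to something like the following. This is a hypothetical sketch, assuming a stock FreeBSD with the DTrace kernel modules available; the PostgreSQL version in the path is purely illustrative:

```shell
# load the DTrace kernel modules before starting the build
kldload dtraceall

# then build PostgreSQL with DTrace probes enabled
cd postgresql-9.4.0            # version is illustrative
./configure --enable-dtrace
make && make install
```

I have not verified this on my machine yet, so take it as a note to my future self rather than a recipe.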

ITPUG survey

As previously announced, ITPUG carried out an "internal" survey aimed at identifying possible critical issues and weak points, for the benefit of the members themselves.
The results were globally positive: the members are happy with the current state of the association, even though some points that can surely be improved did emerge.
In this article I don't want to give a detailed treatment of each single result, which would be both boring and probably too subjective; rather, I will present some summary charts about the answers given to the various questions. For clarity I tried to split the charts into sections, relating them to each other logically (obviously this is not the only possible subdivision).

I would like to thank all the members who, with their commitment, made the questionnaire and the data collection possible, in particular Denis, Simone, Andrej, as well as all the board members (and I'm surely forgetting someone).

Communication

ITPUG communication appears to be good, that is, the published content satisfies the members. The preferred communication medium remains the mailing list, followed by blog/planet; personally I believe this is due to the very nature of the members who, being for the most part IT professionals, feel at ease with mailing lists. As for the planet, I think the reason is the presence of the much better known planet.postgresql.org, the officially recognized aggregator of news and updates in the PostgreSQL landscape.



It is interesting to note a growing demand for communication towards companies and commercial entities, probably due to the desire to make PostgreSQL better known at the Enterprise level.

Update: February 23, 2015

From a discussion on the members' mailing list, it seems that the motivation behind the need to talk to companies about PostgreSQL is also to debunk the myth that certain products can only run on well-defined databases. The classic example is a LAMP stack, which as we know tends to run on MySQL.

Almost obviously, at least in my opinion, most of the content the members would like to see transmitted on the ITPUG channels is technical.

Response times

Given that the mailing list remains the main communication tool among members, how are the response times rated?
In general the times are considered normal; here I would have liked to see more "fast" answers, but we also have to take into account that, in my opinion, ITPUG is a smaller reality than an international PostgreSQL community, where response times are surely better.
The board's replies to the members are also considered normal, and here too I would personally have preferred "fast", or even no "slow" entries at all, given that there are not many requests addressed to the board.
But in both cases, also considering ITPUG's size, I believe we can be more than satisfied with the reported results.




Outreach

On the outreach side the results are a bit more disappointing: while the members know our official planet www.planetpostgresql.it, not all of them are willing to take part in the outreach effort through it or through other articles.




Similarly, the freely accessible technical mailing list made available by PostgreSQL.org is little known among the members, and consequently it is not properly used for technical discussions, including those outside the association.

Still on the outreach side, it seems it is not well known that ITPUG also runs other events besides the well-known PGDay.IT (for instance participations in local conferences of other kinds, such as LinuxDay, LinuxArena, etc.).



Update: February 23, 2015

From an interesting discussion on the members' mailing list it emerged that one of the reasons behind the unwillingness to contribute actively with articles, blog posts, etc., is that doing so requires constant commitment and the time needed to write a good-quality article. On top of these factors, in one's own work one often faces niche (or not particularly widespread) problems, so writing articles about them risks feeling like wasted time, because few other people will face the same situation.
My personal opinion is that all the above motivations are wrong. First of all, it is not true that spreading PostgreSQL requires a fixed cadence. Sure, it is very nice and convenient for all of us, as users, to read fresh news at a fixed cadence, something similar to the PWN. But we can also settle for some small one-off article showing a trick or a potential solution to a problem. What matters is sharing a base of knowledge, be it certain or empirical. As for the time needed to write, well, that is undeniable. There are people who, among their job duties, also have to write technical articles, and others (like yours truly) who have to steal the time from other activities. Still, a few minutes to write an article, or to prepare it offline, can always be found. Sometimes it is enough to comment on a new feature, or to announce a patch, or a new community... in short, there are really plenty of ideas for outreach.

But the point about writing articles also highlights another, perhaps more hidden, aspect that concerns the institutional website. We often wonder whether the website should be highly dynamic and CMS-oriented, but the doubt tends to vanish if there are no constant contributions to publish. Indeed, in my personal opinion, a static site generator could simplify things and encourage members to write small articles/news/reports without the complexity of a full CMS installation.

One thing I do want to stress, though, is that everybody is up to writing an article and contributing to the spread of knowledge, at any level. PostgreSQL (and above all ITPUG) is not a community made only for experts and gurus. One can start with baby steps and grow along the way. And without pure and healthy sharing, none of us will ever be able to call themselves an expert.

Membership fee

The membership fee appears appropriate to most members, and in this respect it will be even more so now that it has been reduced for 2015.

Conclusions

To sum up, we can say that ITPUG is doing well, and that compared to two years ago the work done has brought the members and the association to a better state of general well-being. At the same time, there is still a lot to work on and to do, above all in the active involvement of the members and in the promotion of the various proposals and initiatives.

My hope is also to repeat surveys of this kind with ever greater frequency, as they can provide evaluation indicators about the work done within the association itself.


Sunday, February 22, 2015

ITPUG interview

Thanks to the effort of some of our associates, we were able to run a short survey among the associates themselves in order to see how ITPUG is working and how they feel within the association.
The results, in Italian, are available here with a first brief description.
As a general trend, ITPUG is doing fine, even better than it was doing a few years ago. However, there is still a lot of work to do in order to spread the PostgreSQL word and to get our associates a little more involved in the community itself.
As a last word, I believe this kind of survey should be performed on a regular basis in order to keep track of the work of the association and of its members.

Update: Feb 23
It seems that this kind of survey, and consequently the inspection and analysis of the results, generated a good discussion among the ITPUG members, and I'm proud of this other interesting result in the general management of the association.

Saturday, February 14, 2015

About job interviews...

I found this very interesting article on how to conduct a good job interview in case you don't want to hire a good developer. In other words, do we still need to focus on merge sort or the like to know that we are facing a skilled developer?

Tuesday, February 10, 2015

My story about KDE

KDE is by far the Open Source project I have used the longest, of course excluding Linux itself, a few shells, and to some extent GNU Emacs.

I first met the Kool Desktop Environment (KDE) when I was a student at university. At that time I was spending my time digging into this Linux thing, as well as learning a lot of new cool Unix stuff.
At home, I had just installed Red Hat 5.2, which shipped with what I believe was a prototype of some Gnome applications (I remember the configuration tool was a GTK application). There was no KDE desktop at all.
At the university, there were machines running Debian GNU/Linux (with Gnome) and a few Solaris workstations with CDE.
I liked CDE, I really did! It looked quite clean to me, and I found multiple desktops a very interesting idea. I also liked the panel with multiple menus, or docks, or whatever their name was.
But my Linux box at home looked ugly, with FVWM2 and WindowMaker as the most advanced "desktops" available.

Later that year, I don't remember how, I found a distro called Mandrake.
And everything changed.
Mandrake was based on Red Hat (if my memory serves me well, the version numbering was pretty much the same) and shipped with KDE installed as the default desktop.
I was impressed by such a piece of code: I felt at home with such a UI.
There was a dock, there were applications, there was a file manager that didn't suck (well, the Gnome one sucked for a while, in my opinion), and a lot of ready-to-go applications.
The look and feel was great, and it seemed the developers had spent a lot of time and effort in making everything consistent. As an example, I remember that even the window title was animated when the size of the window was not enough to make it all readable.

Well, at that time I was not doing a lot of stuff with Linux. I was just experimenting and trying to get some documents well written using an office suite (it was too early for me to learn some LaTeX!). The Internet was something quite obscure outside of the campus, and I spent some money replacing my winmodem with an external one that allowed me to use KPPP to connect to the Internet.
My KDE desktop had a classic configuration:




KDE played an important role in my Linux conversion: the command line is quite scary to anyone who is just learning, and having a fully integrated desktop (even more integrated than the others) allowed me to sit in front of my computer and concentrate on learning, reassured that if I could not do something via the CLI, the UI would do it for me.

So my university life continued using KDE and some KDE applications (e.g., the debugger) assisting me in my small homework assignments. However, the university was not using KDE at all, preferring Gnome desktops on both Linux and Solaris; I suspect the choice was made due to some issues between the Free Software Foundation and the Qt/KDE licensing, as well as some major distros (like Red Hat) offering and supporting Gnome.
I have to admit that at some point in time a few Solaris machines started prompting the user with a CDE/KDE choice, but most students, excluding me, kept using CDE because that was what the university had taught us.



At that time it was a lot easier to find help about the Gnome desktop than about the KDE one, and the latter was not well appreciated here in Italy, or at least not as strongly pushed as it was in Germany. But a few distros started shipping KDE and KDE-only installations, such as SuSE and Caldera. Feeling a little less alone in using my desktop of choice, I stuck with it and continued to use and explore it.

I believe a huge jump in quality came with KDE 2, when the "component" model became more and more apparent to the plain user. I'm not talking directly about KOM (and the CORBA-related stuff), even if that was the engine under the hood, but about the fact that an application could do many things simply because they were available to the desktop as a whole. So, for instance, you could open a PDF file inside a web browser, or use a web browser to see a directory tree (and not with the horrible Apache-like web interface), and stuff like that. That was an impressive cohesion among the application components.



In those years I did not have a broadband Internet connection, so the only way to get a fully fresh KDE desktop was to wait for some magazine to come out with a set of CDs. Most of the time updating from such CDs was a pain, and that is how I became quite good at doing backups and restoring a working machine from scratch, but that is another story.
By the way, at some point in time I got a Red Hat 9 shipped with KDE 3, which was really cool. The look and feel, the applications, and the themes were pure eye candy. And how could I forget the great work done by Mosfet (and the incident that occurred with Pixie)?
That was the time of my master's thesis.




Having acquired more confidence in the desktop itself, as well as in the toolchain required to build it, I started building my own versions. That was not so simple, since I had to download all the sources using the Internet connection at my workplace and then let the computer do the hard work. My poor Intel Celeron was busy all day compiling KDE in the background, and you can imagine how responsive it could be at peak load! But it did not matter: in less than a week I could have a new KDE version up and running on my system!
The 3.3 release, with the bubble titles and those great icons, was really impressive to me.




I did that until version 3.5, and then, shame on me, I switched to Microsoft Windows for a while. That's because I was doing a kind of university project and, with university laziness upon me, I didn't want to spend a lot of time configuring my laptop.
Luckily, as when you get drunk, the bad effects go away sooner or later, and so I quickly came back to KDE. At that time one distro in particular was famous in the KDE panorama: Kubuntu. I installed 6.06, if my memory serves me right, and liked the idea of a KDE-specific distribution. I still use Kubuntu on a lot of machines of mine.

The jump to KDE 4 was quite a shock, even for me who had followed the decisions and improvements. However, once I got used to the new interface and look and feel, I was at home again. And of course I was not expecting the big vendors' shift, but I was running 4.0 as soon as it was considered stable. I laugh remembering a colleague of mine who, trying to imitate me, got in a mess with KDE 4.0 and was forced to switch back to something he knew better. He does not know KDE a lot even today, sorry pal!



Even if the desktop was not as complete as in previous versions, the ideas behind it were really promising. In particular, the widgets and the SVG graphics were winners in my opinion (and in fact were "migrated" to other popular proprietary platforms).




In the last few years my KDE aggressiveness has calmed down, and now I follow only stable and mature branches, even if I'm always excited when I get the opportunity to try a new release.

And yes, I've not switched to Plasma 5 yet... I was very busy at the time. But I'm going to try it very soon.

Monday, February 9, 2015

Find Duplicated Files (fdf.pl): a quick Perl script for the task!

I often find my MP3 player or photo repository filled with duplicated items, and of course that results in the annoying task of cleaning up the tree structure.
A few days ago I decided to write a very simple Perl script to get hints about such duplicated files. The script is quick and dirty, and does not aim to be very performant, even if it works quite fine for me.
Here it is:

#!/usr/bin/perl

use strict;
use warnings;
use Digest::file qw( digest_file );
use File::Find;
use v5.10;

die "\nPlease specify one or more directories\n" unless ( @ARGV );

# SHA-1 digest => list of file names sharing that digest
my $files = {};
find( { no_chdir => 1,
        wanted   => sub {
            push @{ $files->{ digest_file( $_, "SHA-1" ) } }, $_ if ( -f $_ );
        }
      }, grep { -d $_ } @ARGV );

# every bucket holding more than one name reveals duplicated files
while ( my ( $sha1, $names ) = each %$files ){
    say "\n\n#Duplicated files: \n\t#rm " . join( "\n\t#rm ", @$names ) if ( @$names > 1 );
}



The idea is quite simple: I use a hash (named $files) indexed by the SHA-1 digest of each file. Every file with the very same digest is appended to the same hash bucket, and therefore, at the end of the story, each entry in the hash that has more than one file name in its bucket reveals a duplicated file.

As you can see, I use the find() function from File::Find with the no_chdir option, so that the file name is not stripped inside the code ref and $_ holds the fully qualified file name. For each entry, File::Find executes the code ref, which tests whether $_ is a file and, if so, computes its digest, using it as a key of the $files hash and pushing the name onto the corresponding bucket.
The search is iterated over all the directories supplied as script arguments, which are in turn filtered through grep to check their directory-ness.

Finally, since I'm a little lazy, the script prints a list of shell-like rm commands to purge the duplicated files, so that I can choose the files and simply execute them.
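The bucketing idea can be tried in isolation with in-memory data. Here is a minimal sketch: the file names and contents are made up, and the core Digest::SHA module stands in for Digest::file, since there are no real files to digest:

```perl
use strict;
use warnings;
use v5.10;
use Digest::SHA qw( sha1_hex );

# hypothetical "files": name => content
my %content = ( 'a.txt' => 'hello', 'b.txt' => 'hello', 'c.txt' => 'world' );

# same digest => same bucket
my %buckets;
push @{ $buckets{ sha1_hex( $content{ $_ } ) } }, $_ for sort keys %content;

# buckets with more than one name are duplicates
for my $digest ( sort keys %buckets ) {
    say "duplicated: @{ $buckets{ $digest } }" if @{ $buckets{ $digest } } > 1;
}
```

Running it reports 'a.txt' and 'b.txt' as duplicates, exactly the way the real script reports files with identical contents.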

Monday, February 2, 2015

The self-timer, this unknown

Miguel de Icaza published a post, controversial to me, in defense of the selfie stick. A selfie stick is nothing but a rigid "wand" to attach to your phone or camera, which allows a simplified grip for taking a selfie. Nothing new for those used to camera tripods, which have offered a similar feature for many years.
Miguel rightly points out the need for a proper license on the photos taken, which justifies the use of the "selfie" technique. And even more rightly, he notes how often one does not want to ask a passer-by to take a picture with a device as personal as one's own smartphone.
But for someone like me, who grew up with the concept of the self-timer (yes, when you placed the camera on a low wall, a small table, or whatever) and of the photo taken by passers-by, Miguel's arguments look ridiculous.
Is it possible that our society has fallen into this abyss, where sharing the photo is more important than the moment of taking it?

Sunday, February 1, 2015

ITPUG survey

These days, thanks also to the efforts of some volunteer members, an ITPUG survey is being carried out, aimed at gathering indications and suggestions from the members themselves about the current workings of the association.
The survey is expected to close on February 15, 2015, and the results will be published afterwards.
I consider it an important step for the association, also because real collections of suggestions have been started very rarely. And even more importantly, this initiative will push the association towards greater transparency with the members.

The new Planet KDE

As a result of a KDE SoK (Season of KDE), the KDE planet has changed its look and feel. The new planet is particularly pleasant, and it is a good refresh of the famous aggregator after several years of honorable service.
Happy reading!

Saturday, January 24, 2015

Perl, printf and qw to rescue!

When dealing with fixed/padded strings, nothing is better, in my opinion, than the printf family of functions.
However, printf has a couple of problems when formatting complex data, especially if compared to pack().
The first problem is that the formatting string can turn out very hard to read; for instance, consider the following one:

qq(%-4s%1s%09d%1s%-50s%-50s%1s%08d%-4s%-16s%-100s)

The second problem is that it cannot easily handle errors in field types, and this often happens when cycling through a file and formatting each line according to a specific formatting string. Consider again the above formatting string: what happens if the third field is not a valid number on some line of the file you are processing? Perl simply complains, or better, printf() complains about an error.

One solution I found that helps with both problems is to dynamically build the formatting string from an array of single atoms. So, for instance, I specify the above formatting string as follows:

$format_specs = [ qw( %-4s %1s %09d %1s %-50s %-50s %1s %08d %-4s %-16s %-100s ) ];

and then later I use something like:

printf join( '', @{ $format_specs } ), @fields;

Why should this be better than using a single pre-built formatting string?
Well, first of all, having extracted each formatting pattern into an array allows for better readability (I can even add a comment to each atom to remember what it means). Second, and most important, I can check each field read from the input file and see whether it complies with its formatting atom. For instance, to check for a number:

for my $index ( 0 .. $#{ $format_specs } ){
  warn "Error on field $index, expected $format_specs->[ $index ]\n"
     if ( $format_specs->[ $index ] =~ /d/ && $fields[ $index ] !~ /^\d+$/ );
}


Of course it is possible to build a more robust check around each field, but the use of an array of formatting atoms allows for a quick and iterative check of each field's nature, as well as ad-hoc error reporting.
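Putting the pieces together, here is a minimal self-contained sketch of the technique, with a reduced, made-up set of atoms and fields instead of the real record layout:

```perl
use strict;
use warnings;

# a reduced, illustrative set of formatting atoms and matching fields
my $format_specs = [ qw( %-4s %09d %-6s ) ];
my @fields       = ( 'AB', 7, 'end' );

# validate numeric fields against their atoms before formatting
for my $index ( 0 .. $#{ $format_specs } ) {
    warn "Error on field $index, expected $format_specs->[ $index ]\n"
        if $format_specs->[ $index ] =~ /d/ && $fields[ $index ] !~ /^\d+$/;
}

# build the format string from the atoms and apply it
my $line = sprintf join( '', @{ $format_specs } ), @fields;
print "$line\n";    # 19 fixed-width characters: "AB  000000007end   "
```

Each atom stays readable on its own, and a field that fails its check is reported with the exact atom it was supposed to match.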

Friday, January 16, 2015

printf and the wrong attitude: an experience

I'm so used to the way the normal print operator works in Perl that I made a silly mistake using the printf function with the same attitude: passing a full list of arguments.

The buggy line was like the following:

printf $format_string, @data, "\n";

Can you see the error?
Well, the newline at the end is not likely to be printed, and it was not in my case. The problem is that everything that follows the format string is treated as an argument to the format string itself. Therefore, the format string must have a placeholder for the newline character, as in:

$format_string = "%d %s %c ....%s";

In my case I was not placing the last %s in the format string because I used the format string itself to control how many data items to extract from the array of elements, that is, something like:

printf $format_string, @data[ 0..$hint_from_command_string ], "\n";

And, in order to waste a little more time, I was trying to figure it out on a terminal that was wrapping the line exactly where the newline should have been, giving the illusion that I was looking at separate lines.
Of course, using a good text editor or some tool like head revealed I was looking at something very different: a single whole line.
And that helped me find the bug and move the newline character into the format string at the very last moment:

printf "$format_string\n", @data;

Shame on me!
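The two behaviours are easy to compare with sprintf, which returns the formatted string; the format and data below are made up for the purpose:

```perl
use strict;
use warnings;

my $format_string = '%d %s';
my @data = ( 42, 'answer' );

# the trailing "\n" is consumed as a surplus argument, not printed
# (recent perls even warn about the redundant argument)
my $buggy = sprintf $format_string, @data, "\n";

# moving the newline into the format string itself works
my $fixed = sprintf "$format_string\n", @data;

print length( $buggy ), ' vs ', length( $fixed ), "\n";    # prints "9 vs 10"
```

The buggy variant yields "42 answer" with no trailing newline, while the fixed one ends with it, which is exactly the single-long-line symptom described above.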

Wednesday, January 14, 2015

CPAN Pull Request: January done!

My first attempt in the CPAN Pull Request Challenge was the production of a patch listed here.
After a few days I asked the original maintainer to close the pull request, and then I submitted another one, with the right set of commits, here.
And it got merged!

Well, my pull request was quite small and, after all, easy. I would not define it entirely a monkey-typing patch, but that is pretty much what I did. And that is fine with me: I'm not trying to demonstrate I'm a killer Perl programmer at the risk of breaking some widely used module!

So what did I learn from my first pull request work?
A lot of things, after all, and a lot of things that I should always keep in mind when collaborating on other projects.
The first thing is public embarrassment: in my first pull request I included a commit that was a wrong change to the module (see here). While doing it, I was sure that the original developers were smarter than me, but I was trying to simplify the code anyway, and it seemed to me that using Exporter instead of a hand-written import method was the right way. But I was wrong!
And despite being wrong, I was taught an important lesson here: I should have checked outside of the module (in the tests) to see who was using the import method in non-ordinary ways. And so I learned how to make better use of grep.perl.org.
Another thing I learnt is that I should not produce more work for the original maintainer: each commit must describe well and in detail what the changes are, explaining also the motivations that led me to such changes. This will be helpful for future references and discussions, and will speed up the approval of the patch.
And of course, I re-learnt how to use git branches. Each development should be made on a separate branch, and each branch should include only a set of related commits.

How did I work on this pull request?
The module assigned to me being quite simple (a single file), I started by reading the source code and looking around for "well-known" problems. Warnings and the hand-written import sounded like good candidates for first fixes, and perlcritic can help at this stage. Of course, neither of the above requires changes on a stable and deeply used module, so I had to throw away some commits.
Then I read the documentation, finding that a few regular expressions were not matching what was described in the docs, and I therefore worked on them to fix them and make the documentation coherent with the code. This is not simple and cannot be automated.
Each change was tested against the test suite, and here tools for test coverage can be very useful to find other ways to improve the dist.

I'm happy to see I was able to produce a contribution, even if small.
And I'm glad to see I'm learning more and more things and methodologies.