Wednesday, February 8, 2017

How to destroy my fossil repository in one step!

Fossil and Git are two great pieces of software, and I use them day by day.
Unluckily the former has less support from integrated development environments (IDEs), and this makes it a little easier to deal with Git when working with mainstream development frameworks. But luckily, Fossil has a way to export to Git and, much more interesting, to do a bidirectional import/export, that is, to export and then re-import a Git repository. In other words, you can work on a repository with both Git and Fossil pretty much at the same time.

Today I decided to realign my Fossil repo to a Git one, so as to have the same logs and timeline available both from the command line (i.e., fossil) and from IDE tools. But I messed everything up:

fossil export \
    --git \
    --export-marks /sviluppo/fossil/luca.fossil.marks \
    /sviluppo/fossil/luca.fossil \
  | git fast-import \
    --export-marks=/sviluppo/fossil/luca.fossil

Can you spot the error?
Well, git fast-import's --export-marks points to the Fossil repository file, not to a marks file!
Boom!
A whole repository destroyed in a few seconds.
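
What I should have typed is the following, where git writes its marks to a separate file (luca.git.marks is just a name I could have picked for it):

fossil export \
    --git \
    --export-marks /sviluppo/fossil/luca.fossil.marks \
    /sviluppo/fossil/luca.fossil \
  | git fast-import \
    --export-marks=/sviluppo/fossil/luca.git.marks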

The only thing that can save you in such a situation is a backup, but, shame on me, I didn't have a fully recent one, so I lost part of the history.
Lesson learned: always do a backup before acting on a repository, even if you are supposed to only read from it (as in an export phase).
Lesson learned: do not trust the shell to complete paths and filenames for you.

Oracle SQL Developer: crash at startup

Alas, I found myself, I don't know why, with a no-longer-working Oracle SQL Developer 4.1.5 on an Ubuntu 16.10 machine.
The problem was a crash of the application, often immediate, seemingly random, with different error messages and frames every time:

Oracle SQL Developer
Copyright (c) 1997, 2015, Oracle and/or its affiliates. All rights reserved.



LOAD TIME : 286
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0xe76ab451, pid=23989, tid=0xa82ffb40
#
# JRE version: Java(TM) SE Runtime Environment (8.0_111-b14) (build 1.8.0_111-b14)
# Java VM: Java HotSpot(TM) Server VM (25.111-b14 mixed mode linux-x86 )
# Problematic frame:
# J 6797 C2 oracle.dbtools.util.Array.merge([I[I)[I (225 bytes) @ 0xe76ab451 [0xe76ab2c0+0x191]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/luca/Downloads/sqldeveloper/sqldeveloper/bin/hs_err_pid23989.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
#
/home/luca/Downloads/sqldeveloper/sqldeveloper/bin/../../ide/bin/launcher.sh: line 1286: 23989 Aborted (core dumped) ${JAVA} "${APP_VM_OPTS[@]}" ${APP_ENV_VARS} -classpath ${APP_CLASSPATH} ${APP_MAIN_CLASS} "${APP_APP_OPTS[@]}"


or a freeze at startup:

LOAD TIME : 259
Uncaught error fetching image:
java.lang.NullPointerException
at org.netbeans.modules.netbinox.JarBundleFile$CachingEntry.getInputStream(JarBundleFile.java:342)
at org.eclipse.osgi.framework.internal.core.BundleURLConnection.connect(BundleURLConnection.java:53)
at org.eclipse.osgi.framework.internal.core.BundleURLConnection.getInputStream(BundleURLConnection.java:99)
at sun.awt.image.URLImageSource.getDecoder(URLImageSource.java:127)
at sun.awt.image.InputStreamImageSource.doFetch(InputStreamImageSource.java:263)
at sun.awt.image.ImageFetcher.fetchloop(ImageFetcher.java:205)
at sun.awt.image.ImageFetcher.run(ImageFetcher.java:169)



Out of desperation I tried to fall back on Eclipse's Database perspective, but in my opinion it is quite unsatisfying for interacting with a database (while it is fine for the initial development). So I had to look for a way to fix sqldeveloper, and the solution turned out to be very simple: remove the GNOME_DESKTOP_SESSION_ID environment variable:

% unset GNOME_DESKTOP_SESSION_ID
% sh ./sqldeveloper.sh

and I don't even have Gnome installed (OK, the libraries, but I would never have expected a session id to be around).
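If you don't want to retype the unset every time, a tiny wrapper script does the trick (the path comes from the log above; adjust it to your install):

#!/bin/sh
# launch sqldeveloper without the offending variable in the environment
unset GNOME_DESKTOP_SESSION_ID
exec sh /home/luca/Downloads/sqldeveloper/sqldeveloper.sh "$@"
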
However, after a while the problem came back insistently, so I applied the following two changes to the file ~/.sqldeveloper/4.1.5/product.conf:

SetJavaHome /usr/lib/jvm/java-8-openjdk-amd64
SetSkipJ2SDKCheck true

Haiku talk

François Revol has made available a very interesting set of slides from his talk at FOSDEM 2017.
In particular, the slides show some of the features that make Haiku a very appealing system, even if in my opinion a still very immature one (both for the small community and for the integration with software designed for other operating systems).
One of the desktop features I would personally find most useful is the X-ray navigation.

Sunday, February 5, 2017

Don't write your own template engine!

We have all done it at least once: we designed our very own sucky, non-scalable template engine!

I remember when I was a young, stupid developer: during my very first job as a contractor I had to extract some data out of a database (not a relational one!) and produce a well formatted PDF letter.
At that time my knowledge was really tiny, but luckily I had both Perl and LaTeX in my bag of tools, so I decided to use both of them (and by the way, this was years before OpenOffice too).

As I said, I was young, and therefore, as pretty much every "green" developer does, I refused to search for and reuse a production-ready module or extension (Template Toolkit, just to name one), considering myself smart enough to develop each required functionality by myself.

So I started developing a Perl engine able to read an INI-like file format: I extracted data out of the database and placed each record as a set of sections in the file, with special sections used to control the engine itself. Having to deal with text, why not use Perl?
Having done the above, the second phase consisted of producing LaTeX-compatible output and running a controlled process to compile it into the final result. Again, Perl could of course do the job.

And you can see a recurring error here: just as I was not using a template engine on the "reading" side (parsing my very own INI-like file format), I was not using one on the "writing" side either (outputting each single LaTeX command one by one).
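
For contrast, here is roughly what the job could have looked like with the very tool I refused to use: Template Toolkit ships a tpage(1) command that expands [% ... %] placeholders, so a sketch (the template name and the variables are made up for the example) is just:

# letter.tt holds the LaTeX letter with [% name %] style placeholders
tpage --define name="John Doe" --define amount="42.00" letter.tt > letter.tex
pdflatex letter.tex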

So far, so bad: I had a fully working system in less than two days (thank you Perl, for regexps!), my boss was happy with that, the check was in the mail, and I was happy enough too.

I have to confess that the script ended up at around 500 lines, partly because I was not using "short" comparison operators as I do today (thank you, '?:'), and of course maintaining the script was a real pain.

Anyway, I learned the lesson, and since those days I have never started developing my very own template engine unless really forced to.
A few years later I had enough experience to recognize the very same error in code provided by other developers, and this made me think the problem is a cultural one. I cannot say whether it is due to impatience or to a gap in the preparation of young developers; I tend to blame the latter most since, having worked a few years in the academic field, I have seen too little time spent explaining the importance of a language ecosystem (thank you, CPAN!).

Saturday, February 4, 2017

sysadmin panics: using X from the terminal...

The X Window protocol, with all its security and efficiency flaws, is still a very useful tool for working from remote locations.
Fortunately Unix bases all of its configuration on text files, but sometimes these are really hard for a human to edit. So it happened that, to update the old printcap, a graphical utility was used. Note that I'm talking about printcap, hence the era before CUPS, tmux and other utilities that made remote work even easier.
But that is not the problem. The real problem is that the inexperienced sysadmin does not know he can use X to his advantage. And so, instead of running the graphical application on the remote machine and displaying it on his own, he walks down several floors of the building to physically act on the machine's console!
Nothing serious, except that when he is not physically in the building the sysadmin has to go and reach the machine...
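
For the record, a sketch of what X makes possible (the host names and the printtool command are placeholders):

# on the local workstation: allow the remote host to open windows here
# (the old, insecure xhost way; ssh -X below is the safer equivalent)
xhost +remote-host

# on the remote machine: point the GUI at the local display and run it
DISPLAY=my-workstation:0 printtool &

# or, in one shot, with X11 forwarding over ssh:
ssh -X remote-host printtool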

perlbrew will have a little of me!

I had a little time to study one of my favorite Perl tools: perlbrew.
While reading the source code I decided to add a little extra information to the output of the available command. That command provides the list of available (i.e., downloadable) Perl versions, but not where it is going to download them from.
Therefore I wrote a patch to show the download links to the outer world.
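
For reference, the subcommand in question, once perlbrew is installed:

# list every Perl version perlbrew can fetch; with the patch applied,
# the listing also shows where each tarball comes from
perlbrew available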
And after a few days, and a little embarrassment over a forgotten failing test, it has been merged!

Hey students, don't buy the teachers' lies!

When I was a university student, my teacher for the Operating Systems course taught me (and a lot of others) really bad shell scripting! I will not name her, but I have to say that today I find the very same errors in the scripts my colleagues write every day, and this is a kind of watermark of the damage she did.

Luckily I found my way out, studying other books and practicing on my own (Linux) computer.

So what were the problems?
To understand them, you need the exercise schema the teacher adopted, which was pretty much always the same: a main script (let's call it the "coordinator") whose aim is to parse the argument list and invoke, using recursion, a "worker" script.
Something like the following code:

#!/bin/sh
# coordinator

# argument validation...

# export the current directory
# in the path to invoke the worker script
PATH=$PATH:`pwd`
export PATH

# first call of the worker
worker



#!/bin/sh
# worker

# recursion on myself: re-invoked, via PATH lookup, for every subdirectory
for f in *
do
    if [ -d "$f" ]
    then
        ( cd "$f" && worker )
    fi
done

# do other work...



The first problem, in my opinion, is the use of a relative lookup to invoke the worker script, and therefore the need to export the PATH variable. First of all, launching a script through a PATH search makes it a little slower to start, since the shell has to look for the script in each PATH entry. Second, and much more important, it is a door to exploitation: without control over the full path of the script, anyone can inject a malicious script somewhere in the PATH and have it run as the worker.
When I raised the above objection to the teacher, the answer was to simply invert the PATH manipulation order:

PATH=`pwd`:$PATH
export PATH

But again, this is a kick in the ass of security: what if I name my script after another system-wide command? I can alter the behaviour of that and other programs...
So what is the solution? Of course, invoke the worker script with an absolute path and do not manipulate the PATH variable at all. After all, what is the point in showing (to the teacher) that you can export a variable?
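
A minimal sketch of the fix (the install location is made up):

#!/bin/sh
# coordinator, fixed: no PATH games, the worker is pinned to one known file
WORKER=/usr/local/bin/worker

# argument validation...

"$WORKER"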

Another problem is the recursion in the worker script: usually such a script scans a directory's content, invoking itself each time a subdirectory is found. Now, while this can work in theory, you can easily imagine the worker turning into a fork bomb on a deep tree. It is quite easy to see how find(1), xargs(1) and friends can help in this situation, as sketched below.
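
Here do_work stands in for whatever the worker actually does in each directory:

# let find(1) walk the tree: no recursion, no fork bomb
find . -type d -exec do_work {} \;

# or batch the directories through xargs(1), spawning far fewer processes
find . -type d -print0 | xargs -0 do_work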

Another oddity that comes to my mind is the way students were forced to test whether an argument was an absolute path or a relative one:

case $1 in
    /*) # absolute
        ;;
    *)  # relative
        ;;
esac


Do you believe the above is easy to read? Is it efficient, and does it scale well? Why not use Unix tools and pipes, regular expressions and awk? Even better: getopt(1), anyone?
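
For instance, the same test with an ordinary pipe and a regular expression:

# absolute paths are exactly those starting with a slash
if printf '%s\n' "$1" | grep -q '^/'
then
    echo "absolute path"
else
    echo "relative path"
fi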

So, dear ex-teacher, what is the whole point in teaching such shit?