Friday, August 08, 2008

Got Religious

[this is the only religious post you will ever encounter on this blog]

After an excessive amount of thinking, reading and
weighing pros and cons, I came to the final conclusion that,
actually and deep down, I believe that
1. God exists
2. Christianity makes sense

Many, many arguments exist against 1 and also against 2.

But as soon as you accept 1, arguing against 2 becomes
really hard. In the end, none of the objections brought into the field
against 2 convinced me; ok, some of them convinced me at first,
and it took years to find out that they were erroneous.
Many objections seem clever at first, but really contain
inconsistencies or stem from pure ignorance.
Most of those uttering them do not reflect on
these objections very well. A nice example is
the following "objection" against 1. and partly 2.:
"If god was almighty, could he create a
stone so heavy, that he can't lift it?"
It takes a bit of thinking, but it is actually easy to refute. I will
leave that as an exercise to the reader; I really don't want to
take the fun out of this. I am sure that atheists, too, will
see the inherent logical problems in this objection :)

I think that most religions can be unified to a common theme;
I therefore believe that some religions share an abstract
common base that lies below a thick tangible layer of
"stories" - I prefer Christianity, as it is the only one
I know just well enough to see how deeply coherent and
consistent everything is. Although some pieces seem odd or
paradoxical at first, they make sense if you dig deeper and
think about them...

Regarding 1: how could a reasonable scientist exclude
the existence of god without leaving many
questions open, and without entangling themselves in inconsistencies?
Well, even if my ignorance/stupidity is not a
convincing argument that 1 is true, if you believe that
something in this world *whatever that is* exists, you end up
believing in some immaterial properties - one way or the other.
I believe that these properties exist (and might even be cornerstones of a
creation done by an intentional, almighty being, which some call god) and are
basic expressions of the Holy Ghost - as are the brain activities
that ask themselves right now whether this makes sense or not.

So ... please do not assume that I will bother anyone with my personal belief system
anymore; I just wanted to share these thoughts. That's all I'll ever write about that
topic.

Thursday, August 07, 2008

Erlang code snippet....

Code like that:

blah(X) ->
    X1 = f(X),
    X2 = g(X1),
    X3 = h(X2),
    i(X3).

is very error prone and, worse, very hard to extend. Just adding another
function application between f(X) and g(X1) requires you to alter all lines
up to the end of the function:

blah(X) ->
    X1 = f(X),
    X2 = j(X1),
    X3 = g(X2),
    X4 = h(X3),
    i(X4).

This creates 2 * (number of lines to the end of the function) potential error sources.

And this is not a very functional style at all.
Why not rewrite it like this:

blah(X) ->
    (chain([
        f,
        g,
        h,
        i
    ]))(X).

where one chains together the functions. Adding a function invocation in between is as easy as pie.



%% chain/1 folds a list of functions into one function that applies them
%% from left to right.
chain(FunList) ->
    lists:foldl(
        fun combine/2,
        fun id/1,
        FunList).

id(X) -> X.

%% combine/2 gets the next function F and the accumulated function G and
%% returns a new function that applies G first, then F.
combine(F, G) ->
    RF = to_local_function(F),
    RG = to_local_function(G),
    fun(X) ->
        RF(RG(X))
    end.

%% Atoms are treated as functions of this module; {Module, Function}
%% tuples are passed through unchanged.
to_local_function(F) ->
    case F of
        {_Module, _Function} -> F;
        _ when is_atom(F) -> {?MODULE, F}
    end.

Another solution might be to use the state monad, which was
built for dealing with this kind of functional composition.
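For comparison, here is a minimal Haskell sketch of the same chaining idea (assuming plain functions rather than the {Module, Function} tuples supported above): composing a list of functions is just a fold with (.).

chain :: [a -> a] -> (a -> a)
chain = foldl (flip (.)) id        -- apply the functions from left to right

blah :: Int -> Int
blah = chain [(+ 1), (* 2), subtract 3]   -- hypothetical stand-ins for f, g and h

-- chain [f, g, h] x == h (g (f x)), so blah 5 == ((5 + 1) * 2) - 3 == 9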

Monday, August 04, 2008

Lazy Evaluation (there be dragons and basement cats)

Lazy Evaluation and "undefined"


I am on the road to becoming a Haskell programmer, and it is still a long way to go. Yesterday some nice guys from #haskell explained lazy evaluation to me.

Take a look at this code:

Prelude> let x = undefined in "hello world"
"hello world"
Prelude>

Because of Haskell's laziness, x is not evaluated since it is not used; hence undefined is not evaluated and no exception occurs.

The evaluation of "undefined" will result in a runtime exception:

Prelude> undefined
*** Exception: Prelude.undefined
Prelude>


Strictness


Strictness means that the result of a function is undefined if any of the arguments the function is applied to is undefined.
Classical programming languages are strict. The following Java example demonstrates this. When the program is run, it throws a RuntimeException although the variable "evilX" is never actually used: strictness requires that all
arguments of a function are evaluated before the function is applied to (called with) these values:



public class Strictness {
    public static void main(String[] args) {
        int evilX = 1/0;
        report_result("sven", evilX);
    }

    private static void report_result(String name, int x) {
        System.out.println("None cares about what " + name + " actually calculated...");
    }
}

When executed, the output of the program is:

Exception in thread "main" java.lang.ArithmeticException: / by zero
at com.blogspot.sheyll.strictness.Strictness.main(Strictness.java:10)


A non-strict Haskell equivalent:

module StrictnessFP where

report_results name x = putStrLn $ "None cares about what " ++ name ++ " actually calculated..."

main = let evilX = 1/0 in
         report_results "Sven" evilX


When executed the following output appears:

[1 of 1] Compiling StrictnessFP ( C:/Users/sheyll/workspace/StrictnessFP/src/StrictnessFP.hs, interpreted )
Ok, modules loaded: StrictnessFP.
*StrictnessFP> main
None cares about what Sven actually calculated...




Enforcing strictness


Sometimes it makes sense to enforce strictness in order to optimize runtime performance. Lazy evaluation means that every value is encapsulated in a thunk, a
function without parameters, which is passed around instead of the actual value and which is evaluated when needed. This can cause some overhead compared to passing around the value directly.
Another point to note is that the evaluation order is not fixed; values are evaluated as needed, and therefore only as a consequence of the interaction with the real world, aka the IO monad.
In Haskell there is a standard operator to enforce strictness:

seq :: a -> t -> t

With the semantics:

seq a t = if a /= undefined then t else undefined

THIS CODE WILL NOT RUN because the result of comparing something with undefined is always undefined; still, it illustrates the idea that seq will check (evaluate and throw away the result of) its first parameter and only return the second parameter if the first did not evaluate to undefined.
Why is this construct useful?
In cases where we have the following structure and want to optimize the performance and/or space requirements:

complex_calculation x = seq x (... x ....)

"seq" therefore ensures the order of evaluation, that's why it is called "seq".

The following code will output the string "None cares about what Sven actually calculated...":

report_results name x = putStrLn ("None cares about what " ++ name ++ " actually calculated...")

main = report_results "Sven" undefined

Whereas the next version, which incorporates "seq", will throw an exception at runtime:

report_results name x = seq x (putStrLn ("None cares about what " ++ name ++ " actually calculated..."))

main = report_results "Sven" undefined




Evaluation means reduction to weak head normal form


Simon Peyton Jones, one of the greatest Haskell hackers out there, coined the term
"Weak Head Normal Form" (WHNF).
In contrast, "Head Normal Form" (HNF) requires that during execution an expression be reduced until it cannot be reduced any further, i.e. all applicable lambda expressions must be applied.
WHNF, on the other hand, is reached as soon as the outermost part of the expression is a constructor or a lambda, even if the expression could be reduced further until finally HNF is reached.
Requiring reduction only to WHNF allows the program to stop the process of reduction, which actually is the process of executing the program, when a value is passed to a function, even if that value has yet to be calculated by the application of another function.
Requiring reduction to WHNF instead of HNF also allows the reuse of computation results and helps a lot when implementing lazy evaluation in the compiler, because an algorithm called graph reduction can be used.
Please search the net for weak head normal form.
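A tiny illustration (a GHCi sketch, not from the original session): "null" only needs to see the outermost (:) constructor, so the list elements stay unevaluated thunks and reduction stops at WHNF:

Prelude> null (map (\x -> x + undefined) [1, 2, 3])
False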
This concludes the theoretical part of this post. The next chapter will deal with
a very practical issue:

There be Dragons: Stackoverflow!



Everyone loves the two higher order functions foldr and foldl.
Here is a possible definition of foldl that's close enough to reality:

my_foldl f a (b:bs) = my_foldl f (f a b) bs
my_foldl _ a [] = a

If that definition is used in the function below, we get a stack overflow exception:

stack_overflow = my_foldl (\x y -> x + y) 0 [1..1000000]

The x+y operations are contained in thunks that pile up because they are never evaluated until the very end, and evaluating that huge nested expression blows the stack. If they were evaluated right away, all that would need to be kept around is a single integer, the result of each addition.
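Roughly, the accumulator grows like this before anything ever forces it (a sketch of the reduction, writing the lambda as (+); this is not actual GHC output):

my_foldl (+) 0 [1..1000000]
  = my_foldl (+) (0 + 1) [2..1000000]
  = my_foldl (+) ((0 + 1) + 2) [3..1000000]
  = ...
  = (...((0 + 1) + 2) + ...) + 1000000     -- one huge nested thunk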

For exactly that purpose a special version of foldl exists, called "foldl'". It's basically ...

A Strict "foldl"


The Data.List module of the Glasgow Haskell compiler contains a function called
foldl' that has (roughly) the following definition:


my_foldl' f a l = mfl a l
  where mfl a []     = a
        mfl a (x:xs) = let a' = f a x in a' `seq` mfl a' xs


The variable a' contains the thunk for the computation f a x, and seq forces a' to be evaluated which means that the thunk called a' will be evaluated - which forces the application of f to a and x.

In this example f is a lambda expression containing the addition of x and y; in fact
it is similar to the expression that caused the stack overflow:

no_stack_overflow = my_foldl' (\x y -> x + y) 0 [1..1000000]

- only this time there won't be a stack overflow:

*StrictnessFP> no_stack_overflow
500000500000


So you think you're safe now? Beware, cause

basement cat is in yar toople!!11oneeleven



What do you expect to be the outcome of the evaluation of basement_cat, as defined
here:

basement_cat = my_foldl' (\(x, y) i -> (i, y + i)) (0,0) [1..10000000]


The surprising result is a stack overflow, although we are using the strict version of foldl.
The stuff piles up again, because the tuple construction (i, y + i) adds another layer of laziness on top of the computation. The tuple itself is evaluated to WHNF because we use the strict foldl, but the contents of the tuple are lazily chilling in thunks that will not be evaluated until explicitly forced!

The output of the basement_cat:

*StrictnessFP> basement_cat
(10000000,*** Exception: stack overflow


Note how the first element of the tuple can still be evaluated, but not the second.

So how can one get the desired result? The solution is to use the strictness operator "seq":

ceiling_cat = my_foldl' (\(x, y) i -> let y' = y + i in y' `seq` (i, y')) (0,0) [1..10000000]

When executed the interpreter outputs:

*StrictnessFP> ceiling_cat
(10000000,50000005000000)


Write unit tests. Always. Even in Haskell.


So the conclusion for me is that lazy evaluation offers great benefits:
* optimization potential through graph reduction
* declarative programming style, e.g. through infinite lists
* recursive functions and data structures

On the other hand, one has to watch out for stack overflows and strictness issues when using e.g. tuples and lists, and data constructors in general, as well as newtype constructors which contain tuples. It is essential to pay special attention to these
issues, because they might not be very obvious, at least for a beginner.

Furthermore, the use of the strictness operator should be clear; it should be noted that it is not easily possible to implement seq in Haskell itself. I think it must somehow be built in.
As the name already suggests, seq fixes the order of evaluation:

a `seq` b

means that a is evaluated first, then b.
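
A quick GHCi illustration of this (a sketch, not from the original session):

Prelude> undefined `seq` "hello"
*** Exception: Prelude.undefined
Prelude> "hello" `seq` "world"
"world"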

I am convinced that especially the beginner needs to write unit tests; even in Haskell there are some non-trivial aspects that might cause unexpected behaviour.

Friday, August 01, 2008

BlogPostAnswer.hs

{-

Intro


-}
module BlogPostAnswer where

import System.Environment

{-

This is Haskell source code, which relates
to a nice blog post by someone comparing
Java with Python.
Original post



I picked some of the Python-Java comparisons
from that post, which I would like to
extend in order to contrast Java and Python
with Haskell.

You can actually use the complete post as a
Haskell source code file. All prose is
embedded in "{-" and "-}", which are the block
comment tokens in Haskell syntax.

I am a beginner Haskell hacker, and I don't have
a clue about Python, so enlighten me if you spot
a mistake.

Like Python, Haskell allows you to use indentation
as part of the syntax; unlike Python, however, you
also have the choice of using "{", "}" and ";"
instead of, or combined with, indentation.

Also, like in Python, and unlike Java, you don't need
to put type declarations into your source code.

The step from Java to Haskell is
not the step from a statically typed language
to a dynamically typed language. Neither is Java
really statically typed, nor is Haskell
dynamic.
The opposite is true: Haskell has a far stricter static
type system than Java has, but it uses a variation
of the Hindley-Milner type system which includes
type inference. This effectively frees one from
writing type specs in most cases.

Haskell has a rich type system with all kinds of
good stuff. For someone who knows
C++ template programming, learning Haskell might be
easier than learning Java 5 generics.

I will try to give code examples in Haskell that
roughly correspond to the Java and Python code examples
from the blog post.

Please acquire GHC and load this blog post as a Haskell file.
Every piece of code shown here is complete and can be
executed in the interactive Haskell shell called ghci.

Assuming ghci is started in the directory where the text file
with this source is located, the following command
should be typed at the ghci prompt in order to compile and
load the code:

> :l (name of file)

(this loads the code)

Now you can call every function in this source code from the prompt
by writing the name and the arguments of that function separated by
spaces,
e.g.

> test_matches_2

To run the "main" function, you must compile the file outside the interactive shell and
start the program from the command line like this:

bash $ ghc --make (nameofsourcefile)
bash $ ./(nameofsourcefile) "hallo welt"

Again, let me remind the reader that I am not
a Haskell expert, and that there might be far
more elegant ways to do it.

ok, let's get started:

Observation 1: Haskell is a unique mixture of readability and brevity



Java:

public class Test
{
    public static void main(String[] args)
    {
        for (String s : args[0].split("\\s+"))
            System.out.println(s);

        System.exit(0);
    }
}

Haskell:

-}

main = getArgs >>= mapM putStrLn . words . head
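
-- Not part of the original post: the same program written with do-notation,
-- just for comparison (it gets a different name so it doesn't clash with the
-- main above; only one of them can be the real entry point).
main_do :: IO ()
main_do = do
  args <- getArgs
  mapM_ putStrLn (words (head args))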

{-

This example pipes the list of command line parameters into the
functional combination of three functions that do the following, in the
order they are listed:

  1. take the first element and ignore the rest: "head"

  2. split this element into the list of words it consists of: "words"

  3. write each word on its own line to stdout: "mapM putStrLn"


As you may have noticed:

  1. ">>=" pipes data

  2. "." combines two functions to a new function

  3. "mapM f" call the function "mapM" with the parameter "f"


Haskell function application syntax is very similar to the
typical Unix shell invocation syntax.
The name of the function and the arguments are put on a line
next to each other, separated by spaces, instead of using brackets.


Observation 2: No need for set/get methods in Haskell



One purpose of the 2nd code example from
the above-mentioned blog post is to show off
a nice Python feature needed in a language
with mutable state and OO features like instance
variables.

Haskell has neither mutable variables nor objects with variables.
Haskell is a pure functional programming language,
meaning that, like Erlang, Haskell has no notion of
variables that may be changed after being bound.
The typical OO class system, having classes with
e.g. setters for properties, is not possible in
Haskell without rolling your own little object
system (which isn't hard at all, but also not
necessary).

Haskell provides other means for grouping data.
One way is to use algebraic data types:

Observation 3: Haskell has useful constructs Java lacks


algebraic datatypes


Algebraic data types are those constructs that start with the reserved keyword "data".
They introduce new symbols together with the structural type information associated
with those symbols. With the hint that "|" means "or", that all words starting with a capital letter
are symbols which we associate with structure and meaning, and that everything starting with a
lowercase character is a (type) parameter, have a look at the datatype definitions in the following:


-}

data Point2D = Point2D Double Double deriving (Show)

{-

This defines a data type which
contains two Doubles: a 2-D point...
Before we go on, let's define another data type,
but this time one that is
parameterised with another type,
comparable with C++ templates:
-}

data Triangle p = Triangle p p p deriving (Show)

{-

now we create a simple alias
for 2D triangles (typedef in C++):

-}

type Triangle2D = Triangle Point2D

{-

Confused now? Good, we can move on...

Our Point2Ds can be created and read.
Their contents (the two Doubles) are
read-only. This of course doesn't bother
us at all; in the long run it turns out
to be a really nice feature
concerning the readability of code.

Assume the following function that generates
a "circle" made of n points with a radius of r:

-}

generate_circle r n = [Point2D (calc sin i) (calc cos i) | i <- [0 .. (n - 1)]]
  where calc op i = r * op ((i/n) * 2 * pi)
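
-- Not in the original post: a quick usage sketch with hypothetical values,
-- eight points on a circle of radius 10 (try it in ghci).
test_circle = generate_circle 10 8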

{-

which looks like this in Java:

class Point2d {
    private final double x;
    private final double y;

    public Point2d(double x, double y) {
        this.x = x;
        this.y = y;
    }
    /* ... some getters ... */
}

public final class GenerateCircle {

    final Point2d[] points;
    final double r;
    final int n;

    public GenerateCircle(double r, int n) {
        points = new Point2d[n];
        this.r = r;
        this.n = n;
        for (int i = 0; i < n; ++i) {
            points[i] = new Point2d(calc_sin(i), calc_cos(i));
        }
    }
    private double calc_sin(int i) {
        return r * Math.sin((double) i / n * 2.0 * Math.PI);
    }
    private double calc_cos(int i) {
        return r * Math.cos((double) i / n * 2.0 * Math.PI);
    }
    /* ... some getters ... */
}


pattern matching and guards



Pattern matching is syntactic sugar that frees one
from writing complicated if and switch constructs.
Combined with algebraic data types (a feature that Java certainly lacks)
it is, as we have seen above, a way to simulate generic methods.

Another useful construct is the function guard;
let's look at a practical example:
the programmer of the display function
may use a simple constraint (a guard) to protect
the precious display function.
(The guard is the boolean expression
between the "|" and the "=".)
Now if in that example x or y is less than 0, a runtime exception
is thrown, with detailed information about where and why the error was caused. (Try it!)

-}

display_list primitives = [display p | p <- primitives]
  where display (Point2D x y)
          | x >= 0 && y >= 0 = show (Point2D x y)
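
-- Not in the original post: a hypothetical test of the guard. The second point
-- violates it, so printing this list in ghci shows the first entry and then
-- aborts with a runtime pattern-match error.
test_display_guard = display_list [Point2D 1 2, Point2D (-1) 2]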

{-

All we do now is add a supertype for primitives
and pimp our "display_list" function a little to make
use of pattern matching in conjunction with
algebraic data types (see below):

-}

data Primitive = Point Point2D | Tri Triangle2D

display_primitives :: [Primitive] -> [String]
display_primitives primitives = [display p | p <- primitives]
  where
    display (Point (Point2D x y))
      | x >= 0 && y >= 0 = show (Point2D x y)
    display (Tri (Triangle p1 p2 p3))
      = show (Triangle p1 p2 p3)
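
-- Not in the original post: a small usage sketch with hypothetical data,
-- one point and one triangle.
test_display_primitives =
  display_primitives
    [ Point (Point2D 1 2)
    , Tri (Triangle (Point2D 0 0) (Point2D 1 0) (Point2D 0 1)) ]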

{-

higher order functions, lambdas, currying and sections



When it comes to functions, lambdas and closures, Haskell really shines.
In Haskell everything is done with functions. Let's have
a look at a small example.
Note that this example also makes heavy use of algebraic data types and pattern
matching.

-}

data MyTree t = Leaf t | Node [MyTree t]

{-


In Java this would look like this:

public interface MyTree<T> {
    public class Leaf<S> implements MyTree<S> {
        public S data;
    }
    public class Node<S> implements MyTree<S> {
        public java.util.List<MyTree<S>> childs;
    }
}


This results in a tree with data in the leaves. Let's define a "matches" function
that returns a list of the "t"s that match a user-defined predicate (or matcher function):

-}

matches matcher (Leaf t) = if matcher t then [t] else []
matches matcher (Node childs) = [e | child <- childs, e <- matches matcher child]

{-

Well, that's it! The first clause of the function "matches" applies only to Leafs,
and the second clause uses a list comprehension to call "matches" recursively
for every child node.

In Java this is considerably harder to write (and to read).

Now let's leave this planet and head off into a galaxy far, far away...
Let's write a matcher that checks that the first letter is a capital "A", but let's
do this in two steps: first we create a "startsWith" function that you should be
familiar with from other languages, and then a function, using the first one,
that checks for the capital "A".

-}
startsWith str = (( == str) . (take (length str)))
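
-- Not in the original post: a quick sanity check of startsWith with
-- hypothetical values.
test_startsWith_1 = startsWith "Hal" "Hallo"   -- True
test_startsWith_2 = startsWith "Hal" "Servus"  -- False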

{-

This can actually be translated to English:
"combine two functions" (this is the "." operator):


  1. A function that compares something for equality with str; this is "(== str)".
    I used a fancy Haskell feature here... (==str) is actually a function call
    to the function "==", which needs two arguments and returns True or False.
    We gave the function only one argument, "str". Of course the "==" function
    cannot give us a Boolean value for that! Instead it gives us a new function,
    a closure, that takes only one argument and returns a Bool. Isn't that cool?!
    I mean, once you get used to that, it's really readable and you will never want
    to go back!


  2. A function that returns a sublist of a list containing only the first
    "(length str)" elements: "(take (length str))".
    BTW, a string in Haskell is a list of characters, and "length" returns the number
    of elements in a list.


Observe that startsWith takes only one argument, not two! You may wonder how startsWith
can work, because it needs the string to search for and a string to search in.

Recall the weird (== str) construct. What happened there happens again: we created a new
anonymous function by calling a function named "==" with not enough arguments; btw,
this is called currying (named after the logician Haskell Brooks Curry).
We also called "take", which actually needs two arguments, with only one argument
(the string length). The result is therefore a function that takes a string and returns
the desired sub-string with the same length that "str" has.
Now the combine operator (".") combines the two functions that each take one argument
and creates a new one, such that the new function passes the input string to (take ...)
and returns the result of "(==str)" called with the result of "(take ...)".

Let's forget about (==str) and (take...) for a while and let's look at the definition of
".":

(f . g) x = f(g(x))

this might look familiar...
Now it should become clearer what the result of calling the startsWith function with
ONE parameter is: a new function that takes a string and returns a Bool indicating
whether or not the string given to that anonymous function starts with str!

Once a programmer has learned a few core concepts about function handling,
Haskell source code is very readable, even though it is often much terser than Java code.

Also note that our startsWith function is perfectly generic: it can be applied to
ANY type of list, not just strings, which happen to be a synonym for lists of "Char"!

I do not want to write down equivalent code for that in Java, though I think it is
possible. Also note that I do not want to judge the qualities of languages;
I just want to study them.

Ok, back to the topic: let's create a test function that roughly corresponds to the one in
the original blog post.

Remember that we wanted to define a second function, in terms of the first one,
that checks if a string starts with a capital "A"?
Here we use currying again, and simply and clearly write:

-}
startsWithA = startsWith "A"

keywordTree = Node [ Node [ Leaf "Hallo"], Leaf "lieber", Leaf "Allerwertester", Node [Node [], Node [Leaf "Wie"], Leaf "gehts"], Leaf "dir"]

test_matches_1 = matches startsWithA keywordTree

{-

Oh, let's not forget to introduce a lambda!
The Greek lowercase lambda actually looks a lot like the "\" (backslash) character,
so in Haskell a lambda is written like this:

\x -> x + 2

in python:

lambda x: x + 2

another way in haskell:

(2 +)

A lambda expression with two parameters can be written in haskell like this:

\x y -> x + y

or simply:

(+)

another way to write (==str) using a lambda:

\x -> x == str


Ok, back to the tree example: let's use a lambda expression as a parameter for "matches"
to filter out all words longer than 5 characters.
Then the complete expression looks like this:
-}

test_matches_2 = matches (\x -> length x <= 5) keywordTree

{-

Of course we can assign the lambda expression to a name and use that instead.

-}

shorter_than_5_chars = \x -> length x <=5
test_matches_3 = matches shorter_than_5_chars keywordTree

{-

This can of course be written with Haskell's usual function definition syntax:

-}

shorter_than_4_chars x = length x <= 4
test_matches_4 = matches shorter_than_4_chars keywordTree

{-

And even more compactly, using functional composition and sections:

-}

shorter_than_6_chars = (<=6) . length
test_matches_5 = matches shorter_than_6_chars keywordTree

{-

shorter_than_6_chars is written in point-free style. That means that,
in contrast to the previous definitions, we didn't explicitly name
the argument that is passed around in the function; instead we just combined
the functions.
This is very similar to the pipe syntax in bash:

$ cat /proc/cpuinfo | grep bogo | awk '{print $3}' | sort -n

Here, too, no variable is mentioned that actually contains the current line
that is passed around.
A function's stdin is its parameter and a function's stdout is its return value.
That's also why it is so important in Haskell to have currying automatically
everywhere, because it allows me to use a function with more than one
parameter as a function with one parameter which returns a new function,
which takes one parameter and returns a new function which takes one parameter ... etc.,
until finally we get the actual result of the computation.

Factories?


Writing factories in Haskell might by now not seem very interesting; still,
let's write some RPC definitions in Haskell:

-}

data RPC = XMLRPC String | JSONRPC String

data RPCResult = Void | Str String | Strs [String] | Numbers [Int] -- define some results here...
  deriving (Show)

callRemote :: RPC -> String -> [String] -> RPCResult
callRemote (XMLRPC conn) function args =
  Str ("nyi, xmlrpc to " ++ conn ++ " func: " ++ function ++ " args: " ++ (show args))
callRemote (JSONRPC conn) function args =
  Str ("nyi, jsonrpc to " ++ conn ++ " func: " ++ function ++ " args: " ++ (show args))


{-

Here is a little test function so you know how to use this;
it will call the JSON-RPC function "helloworld" at "http://..." with the arguments "1" and "2":

-}

test_json_rpc = callRemote (JSONRPC "http://test.org/service") "helloworld" ["1", "2"]

-- now to the factory:


rpc_factory :: String -> String -> RPC
rpc_factory impl arg = case impl of
  "JSON" -> JSONRPC arg
  "XML"  -> XMLRPC arg

test_rpc_factory name = callRemote conn "trigger" []
  where conn = rpc_factory name "http://alarm.org/sirene"
{-

Call the test_rpc_factory function with either "XML" or "JSON" from the
ghci shell.

A factory also makes sense when one wants to extend the program at runtime.
This can be done in Haskell: code can be dynamically loaded and recompiled
in running Haskell programs (see "hs-plugins"), with type checking and dependency
resolution.

It is possible to create a complete OO class infrastructure in Haskell,
similar to what exists in Python, Java and CLOS.

Lesser need for DI and friends



A nice application of factories is dependency injection.
This extends the idea of factories: it allows programmers to wire up all class
instance dependencies and configuration values in one function, in a declarative way.
In powerful functional languages this can be pointless in situations where the approach makes
perfect sense in programs written in Python or Java, as the means of functional abstraction
and combination, especially in combination with a powerful type system and pattern
matching, render dependency injection frameworks simply unnecessary, at least if the whole point
of the dependency injection was merely the declarative wiring of class dependencies and
configuration, and not the ability to change all that at runtime.
On the other hand, in systems like the Erlang runtime, complicated mechanisms exist to actually
exchange code and the dependencies of that code at runtime,
while maintaining several versions of the same code when they are still required,
automatically restarting services in the correct order when necessary, etc.

Well, I hope I have shown that Haskell can actually be fun to work with, and that
different approaches in the design of programming languages are always fun to look at.

That's all folks


The original blog post author compared Python with Java to show the things he likes in Python and
misses in Java.
This comparison has shown how similar Python and Java actually are when compared with Haskell ;)
My relation to Haskell and Java is similar to the blog post author's relation to Python and Java.
I think that the author should definitely have a look at Haskell, for his relation
to Haskell and Python might just develop to be of a similar nature,
and the learning experience might also be pretty enlightening.

-}