Sunday, April 7, 2019

The Hitchhiker's Guide To The Galaxy

Reading this science-fiction comedy by Douglas Adams was actually pretty funny and entertaining. The book is full of geeky references, and I think the author is a genius: he perfectly balances sci-fi elements like spaceships, aliens, robots, and planets with annoying real-life situations and problems, like capitalism, depression, and anxiety. I am personally not a fan of books, I am more of a TV series guy, but curiously, reading this book felt pretty much like watching classic sci-fi cartoon series like Rick & Morty or Futurama.

The book tells the weird story of Arthur Dent, a perfectly normal and average human being, whose life changes forever when he finds out that his friend Ford Prefect is an alien and escapes with him into deep space as Earth is demolished because it was scheduled for destruction. Along with other characters, they end up on Magrathea, a legendary planet known for its planet-building industry. There they meet Slartibartfast, who tells them the story of how a race of hyper-intelligent beings built a supercomputer named Deep Thought to calculate the answer to the ultimate question of life. The answer turns out to be 42, which is a complete disappointment because, on its own, it means nothing.

I like how everything that happens in the book is completely unexpected, and how imagination can unfold in very mysterious ways. Those ways are not always traditional or even rational, but who cares?

Friday, March 22, 2019

Technical Overview of the CLR

In their article, "Technical Overview of the CLR", authors Erik Meijer and Jim Miller compare the functions and characteristics of the Common Language Runtime with those of the JVM. First they explain what an intermediate language is and the motivations behind using one, among which they mention portability (fewer translators are needed), compactness (intermediate code is usually more compact than the original), efficiency (better use of the environment), security (it is easier to enforce security constraints), interoperability (fundamental for software reusability), and flexibility (type-safe metaprogramming concepts).
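To make the idea of an intermediate language more concrete, Python itself works this way: source code is compiled to bytecode, which the standard `dis` module can display (this is my own illustration, not an example from the article):

```python
import dis

def add(a, b):
    return a + b

# Show CPython's intermediate representation: stack-based bytecode,
# conceptually similar to JVM bytecode or the CLR's IL.
dis.dis(add)
```

The exact opcodes vary between Python versions, but the point stands: one compact, portable representation runs on any machine that has the runtime installed.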

In general, they discuss how each of these factors is handled differently in the CLR and in the JVM, and argue that the CLR is, overall, the more capable of the two. For instance, the JVM doesn't implement type-unsafe features like pointers and unsafe type conversions, while the CLR does, since it works with both primitive and composite types. Regarding storage, the JVM is limited to 32-bit wide slots, which forces bigger types like long to occupy multiple slots. On the other side, the CLR supports overflow checking during operations on data types, which is pretty useful in arithmetic calculations and processes.
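To illustrate what overflow checking means, here is a small Python sketch I put together (Python integers are arbitrary-precision, so the 32-bit limit has to be simulated; this is just the concept, not actual CLR code):

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def checked_add(a, b):
    # Add two values as 32-bit integers, raising on overflow,
    # similar in spirit to a `checked` arithmetic block in C#.
    result = a + b
    if not (INT32_MIN <= result <= INT32_MAX):
        raise OverflowError(f"{a} + {b} does not fit in 32 bits")
    return result
```

With checking enabled, a silent wrap-around becomes a visible error, which is exactly what makes it useful in arithmetic-heavy code.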

The CLI does not limit the length of branches to 64K, which is a problem for compilers generating JVM bytecode. It supports enums and structs directly, while the JVM needs to transform them into classes and class hierarchies. This is a big advantage, as we know, because data-oriented models are usually faster and more memory-efficient. The CLR supports tail calls as well, which allows recursion-heavy functional languages such as Haskell and Mercury to be implemented efficiently.
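To show what a tail call actually is, here is a quick Python sketch (Python itself does not optimize tail calls, so the loop version shows what a tail-call-optimizing runtime effectively turns the recursive version into):

```python
def fact_tail(n, acc=1):
    # Tail-recursive form: the recursive call is the very last action,
    # so no work remains pending on the stack when it happens.
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

def fact_loop(n):
    # What a runtime with tail-call support can effectively execute:
    # the same computation in constant stack space.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```

Both return the same result; the difference is that the loop never grows the call stack, which is what languages like Haskell and Mercury rely on.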
In conclusion, the CLI is more powerful than the JVM, and Microsoft, along with the .NET Framework developers, is constantly working on it to develop more complex features and make it more compatible and flexible with other programming languages.

Sunday, March 17, 2019

Building Server-Side Web Language Processors

After reading the article titled “Building Server-Side Web Language Processors”, written by our course professor, Ariel Ortiz, I am very intrigued by the ideas he proposes in the constant search to further evolve this course, not only at our institution but worldwide. By introducing the idea of web servers running a compiler, the course becomes more accessible to students and gives them more context on the importance and utility of compiler design. The author explains the possibility of using server-side implementations for language processors, or more specifically compilers. This makes sense given the obvious fact that the Internet is the trend around which just about every technology revolves.

So far in the semester, our exercises and compilers have all run from a local repository on our own PCs. This article explains the idea of implementing a compiler that runs on a web server, as I previously mentioned. This would obviously require a more complete and complex software development effort, since we would have to integrate different kinds of technology we have learned in previous courses, like web and app development, and software architecture and design. I do think it would be a challenge to learn all that content in four months, but I would be up for trying it. Web development is one of the best-paid branches of the field, and also one of the most in-demand. It is a great market, and the more tools and different experiences we can gather in the subject, the better. I see this as a good proposal for an elective course or even an integrative project, as it involves knowledge from the entire degree.

I do hope the next generations in my program get the opportunity to put their knowledge to the test with a project like this.


Sunday, March 10, 2019

Ruby and the Interpreter Pattern

After reading the technical article “Language Design and Implementation using Ruby and the Interpreter Pattern”, written by our teacher Ariel Ortiz, I can't help but feel sad that I didn't have the chance to use the SIF tool he describes in my Programming Languages course. The article discusses an implementation he made to evaluate LISP expressions given as strings, using a framework called the S-Expression Interpreter Framework, and gives some Ruby code examples to demonstrate how it works.
The principle behind the interpreter pattern is that some problems are easier to solve by creating a specialized language to describe them and then expressing the solution in that same language. This involves both the syntactic and, more importantly, the semantic analysis phases of compilers (the phase we are currently working on in our project). It relies on a data structure called an abstract syntax tree (AST), in which the operands and the hierarchy of functions are organized to give a logic and order to the machine's reasoning and execution.

To build the AST, the framework reads a source, which is turned into a string containing all the information for the tree. The regex API is then used to scan the string, turning the S-expressions into their equivalent Ruby values. Finally, the AST is built.
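Just to illustrate the idea (the real SIF is written in Ruby; this is my own small Python sketch of the same approach, with made-up names):

```python
import re

# A token is a parenthesis or any run of non-space, non-parenthesis characters.
TOKEN = re.compile(r'\(|\)|[^\s()]+')

def parse(source):
    # Turn an S-expression string into a nested-list AST.
    tokens = TOKEN.findall(source)

    def read(pos):
        token = tokens[pos]
        if token == '(':
            node, pos = [], pos + 1
            while tokens[pos] != ')':
                child, pos = read(pos)
                node.append(child)
            return node, pos + 1
        try:
            return int(token), pos + 1  # literal number
        except ValueError:
            return token, pos + 1       # symbol

    ast, _ = read(0)
    return ast

OPS = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}

def evaluate(node):
    # Walk the AST: a list is (operator arg1 arg2); anything else is a literal.
    if isinstance(node, list):
        op, *args = node
        return OPS[op](*map(evaluate, args))
    return node
```

For example, `evaluate(parse("(+ 1 (* 2 3))"))` yields `7`.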
As I mentioned at the beginning of this post, I would have very much liked the opportunity to implement and play around with the framework in my Programming Languages course. I think it would have been interesting and could have given me more context on symbolic expressions and their uses. In any case, I do think the content of this article will be very helpful for the next phase of our project, and for the course itself.

Sunday, February 17, 2019

Mother of Compilers

As I finished reading the article and a brief documentary about her life and work, I can only say I'm genuinely surprised by all the contributions Dr. Grace Brewster Murray Hopper made to computer science. I had no idea who she was or what she did in her long career.

Reading through the article, the only thing that wasn't surprising to me was the fact that she was originally a mathematician rather than a programmer. This seemed pretty typical to me, because if there is one thing all the fathers and mothers of modern computing have in common, it is that they all had extraordinary logical and mathematical abilities. So yeah, I guessed from the beginning that she surely was a mathematician.

From there on, however, the article and the documentary were one surprise after another. I found quite funny the logbook entry about finding the first "actual" bug in a program, referring to a moth that got fried in the computer's hardware. I cannot imagine how she had the ability to create the first compiler on a machine that was built for arithmetic operations rather than a logic-driven language.

But the thing I admired most in her biography was the determination and passion for doing what she loved, which she shared with everyone. Despite being considered "too old" to stay in the Navy at 40, she proved too valuable an asset to cut loose, and was recalled to duty at 60; on top of all this, she kept teaching people about programming in her spare time. I envy all the fortunate people who had the opportunity to attend one of her many lectures and learn directly from her. She was, without a doubt, a woman beyond her era, and an invaluable piece of computing and science history.

Monday, February 11, 2019

Internals of GCC

In the podcast from Software Engineering Radio, with Morgan Deters as guest, the team discusses the characteristics and functions of the GNU Project compiler known as GCC.

At the beginning I found the recording a little tedious, but as it continued I became more interested in the subject. I learned a lot of things about GCC that I never suspected, despite the fact that I've used it for many projects and competitions in C. For instance, I thought it was compatible only with C programs, but I found out that it also works with C++, Java, and several other languages.

On the other hand, it was very interesting to learn about the compiler's working process: it goes through three different phases in which source code is transformed into target code that the computer can understand and execute correctly.

The first phase is the front end, where a translator takes the input (the source) and produces an output that the middle end will use as its own input. The front end parses its input into an abstract syntax tree to create a representation of what is in the source file, and passes that data along.

Then comes the middle end, where the tree produced by the front end is used to generate an equivalent, more efficient low-level representation of the input.

Finally, we have the back end, which takes the output of the middle end as its input to produce code. The code is optimized by replacing certain blocks and is finally translated into assembly language.
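The three phases can be sketched in miniature using Python's own `ast` module (my own toy example, nothing to do with GCC's actual internals):

```python
import ast

def frontend(source):
    # Front end: parse the source text into an abstract syntax tree.
    return ast.parse(source, mode='eval').body

def middle_end(node):
    # Middle end: one classic optimization, constant folding.
    if isinstance(node, ast.BinOp):
        node.left = middle_end(node.left)
        node.right = middle_end(node.right)
        folders = {ast.Add: lambda a, b: a + b, ast.Mult: lambda a, b: a * b}
        fold = folders.get(type(node.op))
        if (fold and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            return ast.Constant(fold(node.left.value, node.right.value))
    return node

def backend(node):
    # Back end: emit one pseudo-assembly line per node (stack-machine style).
    if isinstance(node, ast.Constant):
        return [f"PUSH {node.value}"]
    if isinstance(node, ast.Name):
        return [f"LOAD {node.id}"]
    if isinstance(node, ast.BinOp):
        mnemonic = {ast.Add: "ADD", ast.Mult: "MUL"}[type(node.op)]
        return backend(node.left) + backend(node.right) + [mnemonic]
    raise NotImplementedError(type(node).__name__)
```

Running `backend(middle_end(frontend("2 * 3 + x")))` produces `['PUSH 6', 'LOAD x', 'ADD']`: the front end built the tree, the middle end folded `2 * 3` into `6`, and the back end emitted the pseudo-assembly.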

Learning about this entire process was quite interesting and enlightening. I can't help but think about the complexity involved in GCC for it to be compatible with so many languages and platforms.


Monday, February 4, 2019

The Hundred Year Language

Technology is always evolving. We are continuously developing new software and hardware, with the goal of making it ever faster and more efficient. In his talk at PyCon 2003, Paul Graham discusses the future of programming languages.

I had never given much thought to the future of programming languages. It is true that computing power doubles roughly every 18 months, as Moore's law suggests. However, it seems reasonable that this phenomenon cannot continue forever; there must be a natural limit to this growth, as Paul says. And as hardware gets faster, languages can become simpler to program in. As machines become more powerful, programmers have less and less need to compact everything and use complex algorithms just to make code efficient enough for the machine's limited memory and CPU power.

As I read through this essay, my main thought was wondering whether optimization algorithms will play such an important role in the future of programming, or whether it will be the languages themselves that, as the author suggests, have to adapt and evolve, surviving and merging with other languages in the search for a more complete and compatible one able to run on the faster, more advanced computers of the future.

It is amazing to think that we may already have developed the "hundred-year language": a language adaptable enough to adopt elements from other languages and stay alive. Since other languages will very likely disappear (like Java, as the author claims), I agree with the principle that we should focus on analyzing the properties of our modern programming languages so that the best ones prevail.
One thing is for sure: I believe parallel programming is going to be key in the future of programming, and it will allow us to reach new levels of speed and efficiency over the next hundred years.
