Getting out of the maze with A star

    A local IT company (which shall remain unnamed in this post, and shall be thankful for that) was offering free tickets to this year’s Campus Party event in Mexico. To get the tickets, you needed to complete a programming challenge. Since I’ve never attended any Campus Party* and I enjoyed solving the programming challenge for the Wizeline event, I took some time to solve this one.

    Basically, the challenge was to find the optimal way out of a square, two-dimensional maze. Using an API, you registered for the challenge and requested a maze of size n by n. Then you were supposed to find the optimal path out of it (using a program, of course) and submit that path as a list of (x,y) positions, starting at (0,0) and finishing at the position where a goal (designated by “x”) was placed. An example of a small 8×8 maze would be something like this:
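    Assuming the first coordinate is the row and the second is the column, a layout consistent with the solution listed below would be:

        1 1 1 1 0 0 0 0
        0 0 0 1 0 0 0 0
        0 0 1 1 0 0 0 0
        0 0 1 0 0 0 0 0
        0 0 1 1 1 0 0 0
        0 0 0 0 1 0 0 0
        0 0 0 0 1 0 0 0
        0 0 0 0 1 1 1 x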

    The zeroes are obstacles or walls; the ones are clear paths. So the solution to the previous example is obvious: (0,0), (0,1), (0,2), (0,3), (1,3), (2,3), (2,2), (3,2), (4,2), (4,3), (4,4), (5,4), (6,4), (7,4), (7,5), (7,6), (7,7).

    Here is another example:

    That one is still fairly obvious. However, when I started to request bigger mazes, I noticed the full complexity of the problem: there were many bifurcations and dead ends. Of course, mazes are supposed to be confusing; that is the whole point of their existence. And not only that, I had to submit the shortest path from start to finish, which meant I could not use a brute-force method that simply walks every possible path in the maze until it stumbles upon the goal. The best part was that I had to solve a 1000×1000 maze to claim the prize.

    A 1000×1000 maze might not sound very big, but once you think about all the possible configurations in that space, you realize it is not an easy task. Thankfully, getting out of mazes is a very old problem, pioneered by Cretan kings who wanted to hide away their funny-looking stepsons. For that reason, a lot of smart people have spent a lot of time trying to find the best solution to what is better known as the “shortest path problem”. Among those people was Edsger W. Dijkstra, a Dutch mathematician and computer scientist who rarely used a computer. Dijkstra is one of the elder gods of computer science and now spends his afterlife looking disapprovingly at students who use GOTO statements.

    In 1959, Mr. Dijkstra published an eponymous algorithm to find the shortest path between two points in any structure where there could be obstacles, varying distances, bifurcations, and dead ends. This algorithm (or a similar one, at least) is what mapping software uses to recommend routes (I believe Google Maps uses contraction hierarchies, since they can pre-compute routes to improve execution times).

    One of the many variations of Dijkstra’s algorithm is the A* algorithm (pronounced “A star”). It was created in 1968 by Peter Hart, Nils Nilsson, and Bertram Raphael, all of them researchers at the Stanford Research Institute. A*, in turn, has many variations of its own.
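    As a rough sketch of the idea (not my actual solution, which lives in the repository mentioned below), an A* search over a 0/1 grid in Python, assuming 4-directional movement and a Manhattan-distance heuristic, could look like this:

        import heapq

        def a_star(grid, start, goal):
            # grid: list of lists where 1 is a free cell and 0 is a wall.
            # start, goal: (row, column) tuples. Returns the path as a list of cells, or None.
            rows, cols = len(grid), len(grid[0])

            def heuristic(cell):
                # Manhattan distance: never overestimates the real cost for 4-directional moves.
                return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

            open_heap = [(heuristic(start), 0, start)]  # entries are (f = g + h, g, cell)
            came_from = {}
            best_g = {start: 0}

            while open_heap:
                _, g, current = heapq.heappop(open_heap)
                if current == goal:
                    # Walk the recorded parents back to the start to rebuild the path.
                    path = [current]
                    while current in came_from:
                        current = came_from[current]
                        path.append(current)
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    neighbor = (current[0] + dr, current[1] + dc)
                    if (0 <= neighbor[0] < rows and 0 <= neighbor[1] < cols
                            and grid[neighbor[0]][neighbor[1]] == 1):
                        tentative_g = g + 1
                        if tentative_g < best_g.get(neighbor, float("inf")):
                            best_g[neighbor] = tentative_g
                            came_from[neighbor] = current
                            heapq.heappush(open_heap,
                                           (tentative_g + heuristic(neighbor), tentative_g, neighbor))
            return None

    For the challenge, start would be (0, 0) and goal would be the position marked with the “x”.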

    So, I used an implementation of the A* algorithm to successfully find the shortest path through the 1000×1000 maze. I sent my solution and, even though the API itself confirmed it was the optimal path out of the maze, I never got the prize, not even a reply saying “Somebody solved it before you” or “We ran out of prizes”. I found that very unprofessional and irritating, since the rules specifically said to send an email to a certain person to notify them about the solution.

    Since the Campus Party is now over and I am still a little salty about being ignored, I uploaded my solution to my GitHub repository. It is a very quick and dirty solution, but it works, so don’t laugh too much (or, better yet, improve it and create a pull request). Thankfully, I learned many interesting things and had fun doing this programming exercise, so it was not a complete waste of time.

 

* The total number of actual parties I’ve attended tends to zero.


Alan Verdugo / 2017/07/12 / Uncategorized / 0 Comments

Reporting REST data with DataTables and Bootstrap

    Recently, I was asked to make a webpage that reports data consumed via JSON, as a demonstration of RESTful architectures. I am not really a web developer, so the challenge was interesting, yet easy enough that it would not take too much time.

    Ultimately, I made a very simple table-based report with a jQuery plugin named DataTables. This plugin has some very weird behaviours, but a nice look and feel. It also offers integration with Bootstrap, and together they provide some interesting results. As you can see in the demo, the plugin lets you sort by column and run quick searches, which is nice.

    Of course, I wanted the report to be updated every time the JSON source changed. In other words, the code should be future-proof enough to handle records being added, edited, or deleted from the JSON file. This may seem very obvious, but when I was researching how to do it, I found weird suggestions, like parsing the REST data in the backend and then basically hardcoding it in the HTML. This is the result.

    This is the code:
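    A minimal sketch of the approach (the table id, the /report.json URL, and the column names here are placeholders, not necessarily the ones I actually used):

        <!-- jQuery, DataTables and Bootstrap are loaded elsewhere on the page. -->
        <table id="report" class="table table-striped">
            <thead>
                <tr><th>Name</th><th>Version</th><th>Date</th></tr>
            </thead>
        </table>

        <script>
        $(document).ready(function () {
            // DataTables fetches the JSON itself, so reloading the table (or calling
            // ajax.reload()) picks up records added, edited or deleted in the source.
            $('#report').DataTable({
                ajax: {
                    url: '/report.json',  // placeholder endpoint serving the JSON records
                    dataSrc: ''           // the source is a plain array, not wrapped in a "data" key
                },
                columns: [
                    { data: 'name' },     // placeholder field names
                    { data: 'version' },
                    { data: 'date' }
                ]
            });
        });
        </script>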

This is some data from the JSON data source, just so you can see the format:
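    Something along these lines, with placeholder values that match the columns above:

        [
            { "name": "example-service", "version": "1.0.2", "date": "2015-09-01" },
            { "name": "another-service", "version": "2.3.0", "date": "2015-09-15" }
        ]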

This was an interesting little project. It was obviously not made by a professional web developer, so if you have any suggestions, please go ahead and leave a comment.


Alan Verdugo / 2015/09/28 / Uncategorized / 4 Comments

The power of the return code

    The correct handling of return values made my life as a programmer much easier. That is the whole purpose of their creation. I can write code that will check itself and will notice if any step of any program failed or produced an undesired result, and act accordingly. Checking the return values allows a programmer to write incredibly robust programs.

    I wanted to write a post about this with rookie programmers in mind, and I tried to make it as useful and easy to understand as possible.

    echo is a UNIX/Linux command that prints a message on the screen. $? is a special variable that stores the return code of the last command that was run. Together they are a powerful tool for programming and systems administration.
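    For instance, the true and false commands do nothing except return 0 and 1, respectively:

        $ true
        $ echo $?
        0
        $ false
        $ echo $?
        1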

    A program should always return a value. Any coding standard should have these lines in it:

A successful program should return 0 upon correct completion.

An unsuccessful program should return a non-zero value.

    For example, let’s write a shell script that creates an empty file named “file.txt” in /home/user1. The most basic version of this script would be something like this:
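        touch /home/user1/file.txt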

    That’s it, just one line. Simple, huh? Well, maybe it is too simple. What if there is no free space in the file system? What if the user running the command does not have write permission on that path? What if that path does not even exist? What if there is already a file there with the same name? Should we overwrite it?

    The following is a shell script that is a little more robust (but still very simple). It completes the same task but it checks for more possible failure scenarios:
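    Something along these lines (I am calling the script create_file.sh here, and the exact messages and exit codes are illustrative):

        #!/bin/sh
        # Create an empty file named file.txt in /home/user1, checking common failure scenarios.

        TARGET_DIR=/home/user1
        TARGET_FILE=$TARGET_DIR/file.txt

        # The destination path must exist.
        if [ ! -d "$TARGET_DIR" ]; then
            echo "ERROR: $TARGET_DIR does not exist."
            exit 1
        fi

        # We need write permission on the destination directory.
        if [ ! -w "$TARGET_DIR" ]; then
            echo "ERROR: no write permission on $TARGET_DIR."
            exit 2
        fi

        # Do not silently overwrite a file that is already there.
        if [ -e "$TARGET_FILE" ]; then
            echo "ERROR: $TARGET_FILE already exists."
            exit 3
        fi

        # touch itself can still fail (for example, if the file system is full).
        if ! touch "$TARGET_FILE"; then
            echo "ERROR: could not create $TARGET_FILE."
            exit 4
        fi

        echo "$TARGET_FILE was created successfully."
        exit 0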

    The “exit” command aborts the program and returns to the OS with the integer that follows the “exit” keyword. You may notice that the only successful scenario ends the program with “exit 0”, and all the other (failed) scenarios end with “exit” and a positive integer. So, after the program ends, the programmer or sysadmin can easily check whether the program succeeded or not. For example, something like this could happen:
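    Suppose, for instance, that the file system ran out of space (the output here is illustrative):

        $ ./create_file.sh
        touch: cannot touch '/home/user1/file.txt': No space left on device
        ERROR: could not create /home/user1/file.txt.
        $ echo $?
        4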

    In the example above, the script failed and it showed us a pretty message explaining why. It also returned the value 4. If our script is called by another script, that caller can easily check whether a non-zero value was returned (i.e., whether the script failed). The following is an example of a successful execution:
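    Again with illustrative output:

        $ ./create_file.sh
        /home/user1/file.txt was created successfully.
        $ echo $?
        0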

    If you play with the “stock” programs in a default UNIX/Linux installation, you can see this behaviour as well. For example:
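    Here, find searches a directory that does not contain the file (output is illustrative):

        $ find . -name "helloWorld.sh"
        $ echo $?
        0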

     Wait! Why did the “find” command return a zero?! It did not find the file we asked it to find! The reason is that the execution of the “find” program completed successfully. The fact that it was unable to find a file named “helloWorld.sh” is another matter entirely. In other words, the “find” program is telling us that it is functioning properly; it is not its fault that the file you are trying to find does not exist there. A nice thing to keep in mind: computers will do what you tell them to do, not always what you wish they would do.

     Code like this makes our programs more robust, and can help us to avoid catastrophic failures and emergency calls late at night. Different programming languages and paradigms offer different ways to accomplish this same idea. The “try-catch” statements are good examples of this.
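    As a rough illustration in Python, using the same path as the script above:

        try:
            with open("/home/user1/file.txt", "x") as new_file:
                pass  # "x" mode creates the file and fails if it already exists
        except OSError as error:
            # The exception carries the reason for the failure,
            # much like a return code does.
            print("Could not create the file:", error)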

    The truth is, you can make a program as complicated as you want, and there are many ways to improve the previous script. You need to know when it is “complete enough” and stop there. We could go on and on with our little script and add fancy functionality like sending a failure report via email, or creating the file in an alternative path if the first one does not work for some reason, but let’s stop there for now.


Alan Verdugo / 2015/07/27 / Uncategorized / 0 Comments

Advantages of version control systems for agile software development.

    It is baffling that some teams and companies still do not know how to use version control systems, but it is even more baffling to see development teams that do know about version control systems and still decide not to use them. For example, some prefer to store the source code on each team member’s workstation and then merge the code by hand and work from there. The problems this “approach” brings are many and serious. For example, one developer could use dependencies and libraries that the rest of the team does not need, or end up with mismatched versions of those dependencies.

    I have seen basic, easily avoidable problems such as a team member writing her part of the project in one programming language while the rest of the team wrote their parts in another. These kinds of mistakes are not acceptable even from students.

    Version control systems were created by developers, for developers, because they came to the conclusion that the source code (their main and most important deliverable) needs to be properly stored and organized. Through the years, many different VCS have been created, each with different characteristics. This shows that most developers know the importance of VCS, since they are willing to spend time tweaking and enhancing them.

    For those developers who still do not believe how useful VCS are, here is a list of their advantages:

  1. Accessibility. By having the code on a shared server, it is easy for the development team to synchronize and update the code regardless of each member’s geographic location.
  2. Backups. As the name implies, VCS permanently store every version of each file, both current and previous ones.
  3. Collaboration. The team can collaborate by sharing files (see the first point), but version control systems also make it easy to conduct code reviews, approvals, revisions, updates, and rollbacks.
  4. Troubleshooting. By having quick and easy access to previous versions of the source code, it is much easier to identify which change caused a problem, who made it, and exactly when and how the problem was introduced. This is very useful for post-mortem investigations in production environments, as well as for the developers’ learning process.
  5. Flexibility. VCS are not only used for handling code; they are also used by non-developers. I have seen technical writers use a VCS to collaborate on the writing, preparation, and editing of documents and books, and people who support applications use one to store and manage tutorials, memorandums, guides, and even binary files. In the near future, VCS will be used in other disciplines like architecture, medicine, and graphic design.
  6. Popularity. Every company that takes software seriously is using some kind of VCS. Even companies that only write software to support other kinds of operations, and whose revenue comes from something other than software, have seen the advantages of keeping good control of their source code. This is an advantage for the developers and IT professionals who want to work for them: experience and knowledge of VCS can be a decisive advantage in interviews with these kinds of companies.

Alan Verdugo / 2013/04/11 / Uncategorized / 0 Comments

Review of “Gray hat hacking”

Title: Gray hat hacking: The ethical hacker’s handbook (3rd edition).

Page count: 720.

Publisher: McGraw-Hill.

Author(s): Allen Harper, Shon Harris, Jonathan Ness, Chris Eagle, Gideon Lenkey, Terron Williams.

Language: English.

Publication date: January 6, 2011.

ISBN-10: 0071742557.

ISBN-13: 978-0071742559.

Cover: gray-hat-hacking-cover

 

    “White hat hacking” refers to the primarily defensive and ethical practices of information security, and a White hat hacker is generally understood to be a security expert hired by a company to defend it against cyberattacks. Which brings us to Black hat hacking: the offensive practices, generally carried out with malice, that try to take advantage of exploits and vulnerabilities for personal gain. Practitioners of Black hat hacking obviously prefer to remain anonymous and, in essence, do things that do not sit very well with the law. This book attempts to combine the techniques of White hat hacking with those of Black hat hacking.

    It is very true that a good White hat hacker needs to know the techniques he will be attacked with. It is also true that many companies try to take advantage of curious, bored hackers who discover vulnerabilities in those companies’ systems. For example, Google and Facebook have public bounty programs that pay quite well for finding bugs, vulnerabilities, and exploits that could endanger the company or its users. These hackers, who are not official employees of the company but in a way work for it, are known as Gray hats. It is for this kind of hacker that this book was written.

    “Gray hat hacking” was written by a group of authors, and this becomes very noticeable as you read through the chapters. It is easy to tell when the change of author happens because of very noticeable shifts in writing style, subject matter, and so on. While some authors guide you by the hand very clearly, others assume a vast prior knowledge of the topic at hand, which creates variation in the book’s accessibility but, perhaps unexpectedly, also a certain balance.

    The book includes a wide variety of sections, from entire chapters dedicated to Metasploit to a very basic but equally welcome section on locksmithing, passing through purely legal matters, shellcode, social engineering, SQL injection, reverse engineering, and many other topics. In general, each chapter covers a different area of security, and each one could be expanded into a book of its own. This is both the book’s strength and its weakness: it works as a general introduction to hacking, but if you need to focus on one specific topic, you will have to look elsewhere.

    At the end of each section, the authors included a list of references, which is extremely useful for readers who want to dig deeper into any of the topics. Unfortunately, the nature of printed books does not allow the references to be updated, and some of them are no longer available. This does not take points away from the book, since it is completely outside the authors’ control and is unavoidable in any publication.

    I definitely recommend this book to anyone interested in information security. However, the variety of topics means that beginners will not understand many of the sections. Unfortunately and fortunately, information security is an extremely broad and deep field that is updated, literally, day by day. That is why you need considerable experience in certain topics to fully understand the contents of the book.


Alan Verdugo / 2013/02/13 / Review / 1 Comment