IBM Interconnect 2017

    I had the opportunity to attend IBM Interconnect 2017. This was my first time attending a serious technical conference and I must say that it was an excellent experience for many reasons.

    The reason I was sent to Interconnect in the first place was to show a proof of concept of a project I had been working on for the past few months. I might write about that project in a future post but in this post I will focus on the overall Interconnect experience and the things I learned during the event.

    As usual, the event took place in Las Vegas, more specifically, in the Mandalay Bay convention center. There was another event called IBM Amplify at the MGM Grand, but I did not have the time to attend (which is quite ironic since I was staying right there, at the MGM hotel).

    I had access to the venue since day zero, which means I got to see how everything was put together. As a matter of fact, I was extremely disappointed during the first day because I got to the convention center and only saw a gigantic warehouse with many boxes in it. I was impressed by the ridiculous size of the convention center, but I was much more impressed when I saw how, in just a matter of hours, this almost-empty warehouse was converted into an actual technology conference, complete with booths, places to eat, enormous screens, a couple of mainframes and a few humanoid robots. That, and thousands of human beings.

    I did have some free time every now and then, which I used to get lost in the crowd, grab some swag, speak with random interesting people and attend a few sessions. However, I missed some of the sessions that I wanted to attend, like Ginni Rometty’s keynote and Will Smith’s interview. Thankfully, replays are available on IBMGO and/or YouTube. The following are some of the things that I liked the most.

Mainframes

    IBM had a couple of their legendary mainframes on display. They look impressive, mystifying and gigantic by today’s standards. A very friendly representative approached me and explained the advantages of mainframes, which, according to her, are far from dead. She gave me a quick tour of the components, mentioned that mainframes focus mainly on reliability and compatibility, that most Fortune 500 companies still rely on them, and that they are far more affordable than I expected. It was quite an eye-opening experience to learn that such old technology is still alive and well.

Recorded Future

    To be honest, I had no idea what Recorded Future was. I only got close to this booth to get a figurine of their super-cute mascot, Marty the Martian. However, once I was at their booth, Alex (whose last name I don’t remember) approached me and explained what Recorded Future does: they basically crawl the web looking for intelligence about security threats, then use machine learning and all sorts of algorithms to warn their clients about vulnerabilities and exploits that could affect them in the near future. They are basically taking a proactive approach to IT security using analytics, which I think is a great idea.

Analytics

    I spent some time talking with the IBM analytics teams; they were very friendly and answered all of my novice questions. In fact, they provided very useful recommendations about Watson Analytics and Data Science Experience.

Ubuntu

    This was actually Canonical’s booth, but everything was branded with the Ubuntu logo. Ivan Dobos, a solutions architect, kindly explained to me how Juju works and its use cases. I was very impressed by Juju’s capabilities and it is something that I will definitely explore in the near future.

Phone chargers

    There were a couple of lockers where attendees could lock and charge their phones. A brilliant and very simple idea. Of course, this is not cutting-edge technology, but it was smart, useful and easy to use, which are three characteristics that are often forgotten while designing solutions to problems.

Bluemix Server Challenge

    Somewhere in an IBM office, somebody was faced with a critical problem: “How can we make videogames even more nerdy?” The answer is the Bluemix Server Challenge, a VR game where you take the place of a heroic data center admin and pick up hardware which needs to be correctly placed into a rack. I did not have time to play it, but everybody absolutely loved it.

Conclusions

    During my days at the conference, I heard so many languages and saw so many faces. Technology truly is one of the few things with the ability to bring people together regardless of nationality, language or any preconceived “differences”. I was often reminded of those lines in the Hacker’s Manifesto:

This is it… this is where I belong…
I know everyone here… even if I’ve never met them, never talked to them, may never hear from them again… I know you all…

    I now understand that conferences of this size are better used as intelligence-gathering points, where decision makers, innovators, thought leaders and futurists can get a first-hand idea of the technological trends that will inevitably influence the direction of other industries in the following years. Even better, all these people can interact to generate more ideas.

    I hope I have another chance to attend Interconnect (or any other tech conference). More importantly, I hope I can continue attending while being paid for it. While the tickets could be seen as expensive, I am convinced that these conferences are invaluable if you take the time to attend labs and sessions and try to engage in conversations with random people. After all, smart people from all over the world travel to attend, and you never know who may be listening to your ideas, or whose ideas you could listen to.


Alan Verdugo / 2017/04/19 / Uncategorized / 0 Comments

Massively modifying images with ImageMagick

    Web editors often need to edit a large number of images. For example, the large image sizes produced by professional cameras tend to be overkill for most sites. I wanted a quick and easy way to resize large images, and that is how I found ImageMagick.

    ImageMagick is a suite of tools and, according to the man page, we can “use it to convert between image formats as well as resize an image, blur, crop, despeckle, dither, draw on, flip, join, re-sample, and much more”. First, let’s install imagemagick:
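    # Assuming a Debian/Ubuntu system; other distributions use their own package manager.
    sudo apt-get install imagemagick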

    Then, we can use the convert command to do the actual editing. Check the man page to see the astounding number of options this command has. For example, if I want to resize all the JPG images in the current directory to a width of 1280 pixels and save each result under the same name prefixed with “min-”, I would execute something along these lines:
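    # One way to do it: resize every .jpg in the current directory to 1280 px wide
    # (height is scaled to keep the aspect ratio) and write the result to "min-<name>".
    for image in *.jpg; do convert "$image" -resize 1280 "min-$image"; done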

    And here lies the advantage of ImageMagick: it can be used in a script to edit images extremely quickly. ImageMagick can also be used on Mac OS X and Windows. For more information about the convert command, refer to http://www.imagemagick.org/script/convert.php


Alan Verdugo / 2016/12/12 / Uncategorized / 0 Comments

Broken WordPress after Ubuntu 16.04 upgrade

    After some delays, I finally upgraded the server’s OS to Ubuntu 16.04 LTS. At first I thought that everything had gone fine, but then I tried to access the blog and it did not work; it only showed a blank page. A very bad omen. Then, when I tried to log in to WordPress, this horrible message appeared:

    The message was actually much longer; I am just posting the beginning. If you have suffered with PHP in the past (like me), you will notice that this is uninterpreted PHP code. That was my first clue: something was wrong with PHP. I created the infamous test.php page to check whether PHP was actually working correctly with Apache. For those of you who haven’t done this, it is basically a “hello world” approach to see if PHP is working correctly. We paste the following code into a file named test.php or pleasework.php or something like that.
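    The usual content is a single call to phpinfo(); from the shell, creating the file looks something like this:

    echo '<?php phpinfo(); ?>' > test.php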

    Then we move that file to the Apache public directory (/var/www/html is the default in Ubuntu) and grant it appropriate permissions. Then we go to yourdomain.com/test.php and, if PHP is working, we should see a page with PHP’s logo and all sorts of information like the system, the server API and much more. In my case, I only got another blank page. This meant that something was very wrong with PHP.
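    For example, assuming the default Ubuntu document root and the www-data user, that would look something like this:

    sudo mv test.php /var/www/html/
    sudo chown www-data:www-data /var/www/html/test.php
    sudo chmod 644 /var/www/html/test.php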

    So I went into the server via SSH and executed php -v. It turns out I didn’t even have the php command. How was that possible? Well, PHP5 is no longer the default in Ubuntu 16.04; PHP7 is. At some point during the upgrade, PHP was completely uninstalled. So, let’s install it again:
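    # PHP 7.0 is the version shipped with Ubuntu 16.04.
    sudo apt-get install php7.0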

Then install libapache2-mod-php7.0:
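    # The Apache module that actually executes PHP pages.
    sudo apt-get install libapache2-mod-php7.0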

Then install php7.0-mbstring:
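    # Multibyte string support, which WordPress uses.
    sudo apt-get install php7.0-mbstring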

Then install php7.0-mysql:
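    # The MySQL driver WordPress needs to reach its database.
    sudo apt-get install php7.0-mysql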

Finally, reload Apache’s configuration:
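    sudo service apache2 reload    # or: sudo systemctl reload apache2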

    Once all that was done, I reloaded the test.php page and it gave me all the information I mentioned before. I also logged in to WordPress successfully. Now I am wondering if I should change the OS to something other than Ubuntu, and if I should change the WordPress theme. There are other problems that need to be solved, but for now WordPress is working as it should and I am happy.


Alan Verdugo / 2016/12/04 / Uncategorized / 1 Comment

CompTIA Linux+ certification

    I recently completed the CompTIA Linux+ certification. I spent much more time on this than I had hoped, and because of that, I wanted to write about it. After all, this was the reason why I did not update this blog as frequently as I wanted.

    First of all, let me tell you about the basic stuff. I chose this particular certification as my first one because I am very interested in Linux and everything related to open source. Also, this particular certification has a 3-for-1 offer: if you complete the certification requirements, you will not only get the CompTIA Linux+ certification, you will also get the LPIC-1 and SUSE CLA certifications. Alas, after September 1st, 2016, SUSE decided to stop participating in this offer, so now it is actually a 2-for-1 offer, which is still pretty good in my opinion.

    In order to get the certification, you need to pass two exams: LX0-103 and LX0-104. Currently, each exam attempt costs 194 US dollars. Each exam consists of 60 questions that you can answer in a 90-minute period. In order to pass an exam, you need a minimum of 500 points (on a scale of 200 to 800). I am still not sure how the questions are graded.

Preparing for the exams.

    The only material I used for studying was the book “CompTIA Linux+ Powered by Linux Professional Institute Study Guide: Exam LX0-103 and Exam LX0-104”, 3rd edition. Its name alone should tell you how long and boring it is to read (like most technical books). However, it is the tool that allowed me to get certified, so it does deliver what it promises and I would recommend it. The book also includes a discount code for the exams and access to a website where you can study using flashcards and a practice exam.

    I admit I did not study consistently; there were days when I read the book for a couple of hours, and then I would not pick it up again for weeks because I just did not have the time. I know for a fact that proper discipline and a regular study schedule with this same book would result in better grades on the exams. However, I read the book three times from beginning to end. It was boring, painful, and I got sick of reading the same thing over and over again (I committed to not reading any other book until I got the certification), but it was worth it in the end.

Taking the exams.

    Once you have paid for and scheduled your exam, you just need to go to the Pearson VUE center you selected. You only need to take a couple of official IDs with you. The lady that helped me was very kind and made sure to explain the whole process clearly. She asked me for my IDs, verified that my signature and picture matched, and then took another picture of me. All of this is to ensure that nobody else is able to take the exam and claim it was you. So, if you were thinking of asking a friend to go and take your certification test for you, it will simply not work. Security is very tight, and I think that is good. I was given a set of rules and told to agree to them. The rules basically say that you will not cheat and will not help other people cheat (which is practically impossible anyway).

    After that, I was given a key and told to put all my things in a drawer. You are not allowed to sit in front of your computer with your cellphone, jacket, keys, notebooks, or anything else that could be used to cheat. I was given a marker and a small whiteboard, which I was supposed to use as a notebook if needed.

    As for the actual questions, some of them are multiple choice, some are what I like to call multiple-multiple-choice (“choose the 3 correct answers from 5 options”), and in some questions you have to actually type the answer into a text box. I think 90 minutes is much more time than is actually needed for 60 questions, since you either know the answer right away or you don’t; in both cases you only need a few seconds per question. I used my extra time to re-read the questions and reconsider the answers I chose, since some of the questions can be very tricky.

    Once you finish the exam, you are given your grade, so you know right away whether you passed or not. The only “feedback” you receive is the list of exam objectives you failed. You never know which questions you answered incorrectly or why. If you failed a question related to network routing (for example), your results sheet will show a message saying that “network routing” is one of the exam objectives you failed. And that’s it. Of course, this is done to further ensure that you do not spread information about the questions or answers after you take the exam.

Lessons learned.

    I spent several months studying for the exams. Actually, I spent so much time studying that the original exams (LX0-101 and LX0-102) were updated to new versions, which made me start studying again with new materials because the exam objectives were also updated. In the future I will try to complete certifications faster to avoid this. The SUSE CLA offer was removed just after I scheduled my second exam, but before I actually took it, so I lost that opportunity as well, just because I wanted more time to study. This is just another example of how quickly technology advances: you can literally see how some projects become outdated in a matter of days. If you want to stay current, you need to move fast, and this is something not a lot of people can or want to do.

    Would I do this again? Yes, I would. Maybe not this year or even next, but I think certifications are valuable, not just because of the title in your CV, but because they show that you are willing to undertake a challenge, prepare for it, and actually achieve it, while learning new tricks along the way. Maybe CompTIA Linux+ and LPIC-1 are not as famous as the certifications from Red Hat, and I was able to pass both exams on my first try, but they were much harder than I expected, and because of that I think they should be taken more seriously by employers and recruiters. I considered myself an advanced Linux user with professional experience as a system administrator, but I was still able, and required, to learn many new things in order to get certified; for that fact alone I think it was worth it.


Alan Verdugo / 2016/09/21 / Uncategorized / 0 Comments

Web syndication: The most useful thing nobody uses.

    In Alan Moore’s seminal comic book Watchmen, the retired superhero Ozymandias (considered the smartest man on the planet), now an extremely successful businessman, is seen watching a wall of television sets, each tuned to a different channel. He does this in order to absorb as much information as possible in a reduced amount of time. He uses that information and his intellect to help his businesses grow.

    Watchmen is set during the Cold War era. At that time, television was the best mass communication instrument, so Ozymandias was right to use it. However, we now live in the 21st century, and our broadcast instruments (both physical and logical) are vastly superior to a wall of TVs. Yet it is evident that not many of us are using them to their full potential, even when we are shown every day that information is indeed power. We are relinquishing some of that power every time we use an unorganized mechanism of information consumption.

    In a world that is more connected than ever, we are surrounded by constant updates from an increasing number of sources. We are being pressured to keep up with an ever-increasing amount of information, yet we have an ever-decreasing amount of time to do it. Web syndication mechanisms arose to solve problems like this one, and they are actually very effective at it. However, their usage has been relatively low and it keeps declining.

    The most popular web syndication mechanism is RSS[1]. Reading an RSS feed is like reading the front page of a very organized newspaper. A newspaper that updates itself every minute.

    As an avid user of RSS feeds, I can start my browser and be up to date on all the important news in just a couple of minutes. More importantly, I can do that again every few hours to see if something new has happened, and doing so will only take me another minute. This is a great help to my productivity, since I can still be aware of everything that is happening but it only requires a fraction of the time it used to take before I started using feeds. Before using feeds, I browsed through the news pages aimlessly, wasting hundreds of hours every month. Now, I still get the same content, but I have reduced the time I need to do so.

    There was some controversy when Firefox 4 launched and the RSS button was suddenly removed from the default layout. According to Mozilla, the reason behind that decision was that only 3% of users actually clicked on it[2], which is one of the lowest usage rates among the main UI elements in the whole browser. This proves the point: web syndication is one of the most useful mechanisms on the Internet, but only a waning minority is taking advantage of it.

This heatmap shows the usage rate of the RSS button in Firefox, based on over 117,000 Windows 7 and Vista Test Pilot submissions from 7 days in July 2010. Source: https://heatmap.mozillalabs.com/

    Google Chrome plainly offers no native support for feeds, and the installation of an extension is necessary in order to use them. In 2013, Google discontinued Google Reader, the most popular RSS client at the time. Twitter stopped supporting RSS in 2013. Apple removed support for RSS in Safari and Mail when OS X Mountain Lion launched in July 2012; from then on, users are directed to the Mac App Store, where they can buy an RSS reader[3]. Worst of all, the number of web designers who add an RSS button to their designs is decreasing, as is the number of back-end programmers who actually implement an RSS feed in the page to begin with.

    No wonder the usage of Web syndication is declining. It was low to begin with, and the tech giants are making it harder for people to use it or even to know that it exists.

    But why have all these companies forsaken RSS? After all, RSS is a very useful feature that is easy to implement, and it does not require many resources. In the case of Google Reader, it was said that product usage had declined, which is similar to the reasoning behind the removal of the RSS button in Firefox. However, Apple went a step further and removed a feature that was already working, causing many problems for its users in the process. RSS in OS X was not broken, but Apple decided to “fix it” by simply removing it. In other words, RSS was not hurting anybody, but they decided it was its time to go.

RSS usage statistics. Source: http://trends.builtwith.com/feeds/RSS

    Without RSS, I would have to go to my news pages and look for the news, which I have found very distracting. With RSS, the news comes to me and waits there to be read. I can choose when I want to visit a site in order to read further. I can preview the content of a site and decide beforehand if I really want to visit it.

    And this raises the question: are RSS/Atom feeds bad for a website? After all, if I have to actually go to a site in order to read the headlines, it is much more likely that I will read more articles, click on banners and spend more time browsing the site, which directly or indirectly increases the site’s income. In this sense, for a webmaster, feed reading is like window-shopping, when the user could instead enter the store and be subjected to a much more complete marketing experience. This seems to be the case: Facebook and Twitter have both reduced or completely removed support for RSS feeds. They obviously prefer their users to spend more time on their advertisement-plagued main sites instead of just getting plain-text updates via a feed. Window-shopping is bad business, while a complete “shopping” experience has proven to be much more profitable. Just ask Starbucks.

References:

  1. http://trends.builtwith.com/feeds
  2. https://heatmap.mozillalabs.com/
  3. https://en.wikipedia.org/wiki/OS_X_Mountain_Lion#Dropped_and_changed_features

Alan Verdugo / 2016/06/19 / Uncategorized / 0 Comments
