Talk @ Spark Summit 2016: Connecting Python to the Spark Ecosystem

I was lucky enough to go to San Francisco in June to give a talk at Spark Summit 2016. It was my first time in San Francisco and Silicon Valley, and I have to say it is a unique place.

My talk at Spark Summit covered a variety of topics around Spark, Python, Python libraries, deployment, and use cases, plus a little bit about alternatives and the future.

Below you can find the video of the presentation and slides.

PS: There is one good joke/story at 9:24

ReproduceIt: Name Trends

ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code.

Today, as a small return of the ReproduceIt series, I try to reproduce a simple but nice data analysis and webapp that braid.io did, called Most Beyonces are 14 years old and most Kanyes are about 11.

The article analyses the trend of the names of some music artists (Beyonce, Kanye, and Madonna) in the US, and it also has some nice possible explanations for the ups and downs over time; it's a quick read. The data is based on the Social Security Administration and can be downloaded from the SSA website: Beyond the Top 1000 Names

The data is very small, so loading it into pandas and plotting it with bokeh was very easy.
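For reference, a minimal sketch of that kind of analysis, assuming the SSA national files (yobYYYY.txt, with name, sex, count columns) are extracted into a names/ directory; the paths and year range are just illustrative:

```python
import pandas as pd
from bokeh.plotting import figure, show

# Load the national baby name files: one file per year, no header,
# columns are name, sex, count
frames = []
for year in range(1950, 2015):
    df = pd.read_csv('names/yob%d.txt' % year, names=['name', 'sex', 'count'])
    df['year'] = year
    frames.append(df)
names = pd.concat(frames, ignore_index=True)

# Total babies named Beyonce per year
beyonce = names[names['name'] == 'Beyonce'].groupby('year')['count'].sum()

p = figure(title='Babies named Beyonce per year',
           x_axis_label='year', y_axis_label='count')
p.line(beyonce.index, beyonce.values)
show(p)
```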

Talk @ PyData NYC 2015: Querying 1.6 billion reddit comments with python

I was lucky enough to go to beautiful NYC in the fall to give a talk at PyData NYC 2015.

The talk was about how to query around 1.6 billion reddit comments with Python tools while leveraging big data tools like Impala and Hive.
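As a taste of the kind of thing the talk covers, here is a hedged sketch using impyla to run SQL against Impala from Python; the host, port, and table name are assumptions for illustration:

```python
from impala.dbapi import connect

# Connect to an Impala daemon (21050 is the default HiveServer2 port)
conn = connect(host='impala.example.com', port=21050)
cur = conn.cursor()

# Count comments per subreddit over the whole dataset
cur.execute("""
    SELECT subreddit, COUNT(*) AS comments
    FROM reddit_comments
    GROUP BY subreddit
    ORDER BY comments DESC
    LIMIT 10
""")
for subreddit, comments in cur.fetchall():
    print(subreddit, comments)
```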

Some of the content can be found on the Continuum developer blog

Below you can find the video of the presentation and slides.

PS: There are a couple of good jokes at 35:05, if you like bad jokes

Multicorn in Docker + conda for Postgres Foreign Data Wrappers in Python

Multicorn is (in my opinion) one of those hidden gems in the Python community. It is basically a wrapper for Postgres foreign data wrappers, and it makes it really easy to develop one in Python. What that means is that it allows you to use what is probably the most common and widely used database right now, Postgres, as a frontend for SQL queries while using different backends for data storage and even computation.
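To give an idea of how little code a wrapper needs, here is a minimal sketch of a Multicorn foreign data wrapper; the class and the rows it yields are toy examples of mine, not from any real project:

```python
from multicorn import ForeignDataWrapper


class ConstantFDW(ForeignDataWrapper):
    """A toy wrapper: every scan of the foreign table yields three rows."""

    def __init__(self, options, columns):
        super(ConstantFDW, self).__init__(options, columns)
        self.columns = columns

    def execute(self, quals, columns):
        # A real wrapper would query an external source here
        # (an API, files, another database, ...) and yield its rows
        for i in range(3):
            yield {col: '%s %s' % (col, i) for col in self.columns}
```

On the Postgres side you install the multicorn extension, create a server pointing at the Python class, and define a foreign table over it; after that, plain SQL queries against the table call into execute().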

Unfortunately it's not really well known, and therefore not widely used. The only real example I have been impressed with is a talk by Ville Tuulos: How to Build a SQL-based Data Warehouse for 100+ Billion Rows in Python, where he talks about how AdRoll "built a custom, high-performance data warehouse in Python which can handle hundreds of billions of data points with sub-minute latency on a small cluster of servers".

That talk is only a year old but to ...

Crawling with Python, Selenium and Docker

TL;DR: Using Selenium inside a Docker container to crawl websites that need JavaScript or user interaction, plus a cluster of those using Docker Swarm.

While simple HTTP requests are good enough 90% of the time to get the data you want from a website, I am always looking for better ways to optimize my crawlers, especially on websites that require JavaScript and user interaction: a login or a click in the right place sometimes gives you the access you need. I am looking at you, government websites!

Recently I have seen more solutions to some of these problems in Python, such as Splash from ScrapingHub, which is basically a QT browser with a scriptable API. I haven't tried it, and it definitely looks like a viable option, but if I am going to render a webpage I want to do it in a "real" (Chrome) browser.

An easy way to use Chrome (or Firefox or any other popular browser) with a scriptable, multi-language API is Selenium.
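For example, a minimal sketch that drives a Chrome instance running inside the official selenium/standalone-chrome Docker image, assuming the container was started with docker run -d -p 4444:4444 selenium/standalone-chrome:

```python
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

# Talk to the browser inside the container through the Selenium server
driver = webdriver.Remote(
    command_executor='http://localhost:4444/wd/hub',
    desired_capabilities=DesiredCapabilities.CHROME,
)
driver.get('http://example.com')
print(driver.title)  # the page was rendered by a real Chrome, JS included
driver.quit()
```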

ReproduceIt: Reddit word count

ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code.

In this small article I reproduce the results (not the visualization this time) from reddit user /u/fhoffa, who posted a nice word cloud visualization of the most common words in some famous subreddits.

The reddit post can be found in the Data Is Beautiful subreddit: Reddit most common words for /r/politics, /r/movies, /r/trees, /r/science, and the original word cloud was this: Original Wordcloud

For some context, he mentioned that he used Google BigQuery and Tableau. The data used was the most recent month available (May 2015) from the reddit dump that user /u/Stuck_In_the_Matrix made available in a nice torrent.

To reproduce the results I am using dask, a nice new project from Continuum Analytics that got a lot of attention at the most recent SciPy. A little disclaimer here: I currently work for Continuum, but this post is not sponsored in any way.
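A hedged sketch of the kind of word count this builds up to with dask.bag, assuming the dump has been split into line-delimited JSON files with a body field; the paths and cleaning steps are illustrative:

```python
import json
import dask.bag as db

# Load the dump: one JSON object per line, the comment text is in 'body'
comments = db.read_text('reddit/RC_2015-05/*.json').map(json.loads)

# Split every comment into words and keep the 50 most frequent ones
top_words = (comments.map(lambda c: c['body'].lower().split())
                     .flatten()
                     .frequencies()
                     .topk(50, key=lambda kv: kv[1]))

print(top_words.compute())
```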

ReproduceIt: FiveThirtyEight - How Baltimore’s Young Black Men Are Boxed In

ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code. All the code and data is available on github: reproduceit-538-baltimore-black-income. This post contains a more verbose version of the content, which will probably become outdated, while the github version may be updated with fixes.

For this second article I (again) took an article from one of my favorite websites, FiveThirtyEight. In this case I took an article by Ben Casselman, "How Baltimore’s Young Black Men Are Boxed In", which I found interesting given the recent events in the US, specifically in Baltimore, Maryland.

The article analyses the income gap between white and black people in different cities all around the US. The data source is the "American Community Survey", and it is available in the "American Fact Finder"

With this app it is not possible to crawl the data like in the previous ReproduceIt article ...

ReproduceIt: FiveThirtyEight - The Three Types Of Adam Sandler Movies

ReproduceIt is a series of articles that reproduce the results from data analysis articles focusing on having open data and open code. All the code and data is available on github: reproduceit-538-adam-sandler-movies. This post contains a more verbose version of the content, which will probably become outdated, while the github version may be updated with fixes.

I am a fan of FiveThirtyEight and how they base most of their articles on data analysis. I am also a fan of how they open source a lot of their code and data on github. The ReproduceIt series of articles is heavily inspired by them.

In this first article of ReproduceIt I am going to try to reproduce the analysis Walt Hickey did for the article "The Three Types Of Adam Sandler Movies". This particular article is a simple data analysis of Adam Sandler movies, and they didn't provide any code or data for it, so I think it is a nice opportunity to start this series of posts.

ReproduceIt

I have been wanting to restart my blog for a while now. In 2013 I wrote around 20 posts, but in 2014 I only wrote 3 times, and this is my first post in 2015 (and it's not even a real one); the last one was more than 6 months ago. Time is one of the reasons, but sometimes I have a couple of hours to kill that I would like to use to blog, and ideas don't come to me easily.

Last weekend I attended PyData Dallas 2015, and in some of the talks about data journalism and open data I got a simple but, I believe, effective idea to get me writing new posts.

The idea is to take articles from the internet that do some kind of data analysis and reproduce the results. In most cases the data and analysis are not published but ...

From zero to storm cluster for scikit-learn classification

Apache Storm is a new technology for real-time computation that has been in the big data news lately, and I was curious to try it to see if it's really good or just the new map-reduce.

One of the first (and no-brainer) ideas I had was to do real-time classification with a scikit-learn model. The main issue was that Storm is Java and I didn't want to do all the integration between Java and Python myself, but after I saw in the PyData videos that the people at Parse.ly had already taken some of that pain away with their new streamparse library, I had no more excuses to try it.
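To make the idea concrete, here is a hedged sketch of a streamparse Bolt that loads a pickled scikit-learn model and classifies each incoming tuple; the model path, the tuple layout, and the import path (which varies between streamparse versions) are assumptions:

```python
import pickle

from streamparse.bolt import Bolt  # 'from streamparse import Bolt' in newer versions


class ClassifierBolt(Bolt):
    def initialize(self, storm_conf, context):
        # Load the pre-trained scikit-learn model once per worker
        with open('/tmp/model.pkl', 'rb') as f:
            self.model = pickle.load(f)

    def process(self, tup):
        # Assume each tuple carries one feature vector as its first value
        features = tup.values[0]
        label = self.model.predict([features])[0]
        self.emit([int(label)])
```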

Storm cluster

I decided to deploy a Storm cluster, and after failing to use their EC2 scripts I decided to do it myself using Salt. I found an amazing step-by-step tutorial from Michael Noll ...