Talking about productivity methods

The best way to procrastinate is to research productivity.

Boris Gorelik

This week, most of Automattic's Data division is meeting in person in Vienna. During one of the sessions, I presented my productivity method to my friends and coworkers.

Presenting this method was a fun experience for me, and I decided to try doing it again in a more formal and structured way. If you know of a productivity-oriented meetup that might be interested in hearing me, let me know.

Some post-talk notes

It turns out that the method I’m using is much closer to Mark Forster’s “Final Version” than to his AutoFocus.

Over the years, Mark Forster created and tested many time management approaches. Scan through http://markforster.squarespace.com/tm-systems to find something that might work for you.

Meet me at EuroSciPy 2018

I am excited to run a data visualization tutorial, and to give a data visualization talk during the 2018 EuroSciPy meeting in Trento, Italy.

My tutorial, “Data visualization — from default and suboptimal to efficient and awesome,” will take place on Sep 29 at 14:00. This is a two-hour tutorial during which I will cover two or three examples. I will start with the default Matplotlib graph and modify it step by step to make it a beautiful aid in technical communication. I will publish the tutorial notebooks immediately after the conference.
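To give a flavor of this kind of step-by-step refinement, here is a minimal sketch: start from Matplotlib defaults, then remove chart junk and label only what matters. The data, labels, and specific tweaks below are illustrative assumptions, not the tutorial’s actual examples.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Illustrative data (made up for this sketch)
years = [2014, 2015, 2016, 2017, 2018]
values = [3.1, 4.0, 5.2, 6.9, 8.4]

fig, ax = plt.subplots()
ax.plot(years, values)  # step 0: the Matplotlib default look

# Step-by-step refinements of the kind the tutorial walks through:
ax.spines["top"].set_visible(False)    # drop the unneeded box edges
ax.spines["right"].set_visible(False)
ax.set_xticks(years)                   # tick only the meaningful x values
ax.set_ylabel("Revenue, $M (illustrative)")
ax.set_title("Growth, 2014-2018", loc="left")  # left-align the title
```

Each change is small on its own; the cumulative effect is a plot that communicates instead of merely rendering.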

My talk “Three most common mistakes in data visualization” will be similar in nature to the one I gave in Barcelona this March, but more condensed and enriched with information I learned since then.

If you plan to attend EuroSciPy and want to chat with me about data science, data visualization, or remote work, write a message to boris@gorelik.net.

The full conference program is available here.

Time Series Analysis: When “Good Enough” is Good Enough

Today’s talk of mine at PyCon Israel, in post format.

Data for Breakfast

As professionals, many data scientists strive for the best results practically possible. However, let’s face it: in many cases, nobody cares about the neat and elegant models you’ve built. In these cases, fast deployment is pivotal for the adoption of your work — especially if you’re the only one who’s aware of the problem you’re trying to solve.

This is exactly the situation in which I recently found myself. I had the opportunity to touch an unutilized source of complex data, but I knew that I only had limited time to demonstrate its utility. While working, I realized it’s not enough that people KNOW about the solution; I had to make sure that people would NEED it. That is why I sacrificed modeling accuracy to create the simplest solution possible. I also had to create a RESTful API server, a visualization…

Come to PyData at Bar-Ilan University to hear me talk about anomaly detection

On June 12th, I’ll be talking about anomaly detection and future forecasting when “good enough” is good enough. This lecture is part of PyCon Israel, which takes place between June 11 and 14 at Bar-Ilan University. The conference agenda is very impressive. If “python” or “data” is part of your professional life, come to this conference!