5 tips on future-proofing your Medium posts

So, you’ve decided you want to blog or write on Medium – where all the cool kids hang out. Great. But remember that there are other, similar platforms to write and blog on, and at some point Medium (like everything else on the Internet) might lose its appeal or even, god forbid, shut down.

When that happens, what will happen to your posts? How will you and the rest of the internet reach them?

There have been numerous occasions in the past (the most recent being Posterous) where a platform simply died and all the links pointing to it, with all of their SEO goodness, went to shaite.

You can always run your own server, but not everyone has the time, energy or know-how to do so.

Here are a few tips to help you stay forward compatible with most platforms in the future:

  1. Choose a platform that supports a custom domain. If you can’t blog under your own domain you will never truly own your content.
  2. Make sure to blog under your own domain (or a subdomain such as blog.mycooldomain.com). If you don’t own a domain – get one. It’s very cheap: anywhere between ~$3-$10/year depending on the TLD (.com, .net, .co, etc.) and the domain registrar (I like namecheap.com). Medium added support for that in March 2015, so you have no excuse.
  3. Make sure you have some sort of backup for your posts. I usually like to write my posts without formatting in Google Docs or Simplenote so I get an immediate backup, and copy the content over to my publishing platform just before publishing. Doing so makes sure that even if your platform goes down, you can always restore your posts using your domain and your backup of the posts. Another benefit is that you don’t need to rely on an export feature that your dying platform may or may not provide. If you are slightly more technically advanced, I suggest writing the posts in Markdown – there are various tools to generate better-looking HTML that you can later paste into your current cool blogging site.
  4. Make backups of attached/uploaded resources. If your posts contain images or other resources that you have uploaded to your chosen platform, make sure to keep copies of these files so that you can always restore them along with the post text on another platform if needed.
  5. Save all of your posts’ URLs. Make a document or spreadsheet of all your posts’ URLs. For example, if a blog post URL is http://mycooldomain.com/2015/12/31/something-cool make sure to copy and save it. If your chosen blogging platform goes down you can always add a redirect rule from the old URL (as it appeared on the old platform) to where the post lives on the new one (see the sketch after this list), thus not breaking da internetz!
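
To make that concrete, here is a minimal sketch of such a redirect rule, assuming you eventually land somewhere you can run a bit of code (Python/Flask here; the old path is the example URL above and the new URL is made up):

    from flask import Flask, redirect

    app = Flask(__name__)

    # Hypothetical mapping: the old post path (as it appeared on the dead
    # platform) redirects to the post's new home.
    @app.route("/2015/12/31/something-cool")
    def old_post():
        # 301 tells browsers and search engines the move is permanent
        return redirect("https://mycooldomain.com/posts/something-cool", code=301)

Most platforms and web servers offer an equivalent redirect mechanism, so the list of URLs is the one thing you can’t recreate after the fact.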

These rules don’t apply just to Medium; they apply to most platforms such as WordPress (wordpress.com or self-hosted), any static site generator, Ghost, Svbtle, Squarespace, Weebly, etc.

In my opinion, the best choice nowadays for people who don’t want to mess around with servers is to use a static site generator such as Jekyll. While it involves running a few commands in your shell (that black screen with running white text), you can easily build a site, generate it and host it on platforms such as Surge or Netlify.

Let’s Encrypt Error: The server could not connect to the client to verify the domain :: Failed to connect to host for DVSNI challenge

Are you using Let’s Encrypt? (If not, you should go ahead and use it to generate SSL certificates for ALL of your web servers.)

If you want to run it on EC2 or GCE using the --standalone argument (./letsencrypt-auto certonly --standalone -d example.com), make sure port 443 (for SSL) is open on that server.
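
If the server lives on EC2, opening the port means allowing it in the instance’s security group. A minimal sketch with boto3 (the security group ID is a placeholder; on GCE the equivalent is a firewall rule allowing tcp:443):

    import boto3

    # Allow inbound HTTPS (port 443) so the challenge can reach the
    # standalone letsencrypt server.
    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",  # placeholder: your server's group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        }],
    )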

Otherwise you’ll get the infamous:

The server could not connect to the client to verify the domain :: Failed to connect to host for DVSNI challenge

Go ahead. Install it. Today.

Tornado’s secure cookie support in Flask

I’ve recently had the chance to write a new project on AppEngine.

It’s been a long time since I last tried it, as I was too lazy (as always) to set up servers just for that.

I’ve decided to use Python but just to be sure I won’t be vendor locked into various AppEngine services I’ve decided to use:

  • Flask (instead of webapp2)
  • Cloud SQL (instead of DataStore)

This will ensure that I can break out of AppEngine easily with minimal code changes.

This was the first major Flask project I’ve written, and I found its current cookie support a bit lacking compared to Tornado’s secure cookies (I won’t go into the debate of whether it should be kept like that, or why I’m not using a session cookie that points to the real session data stored somewhere else).

I’ve decided to create a small module to add Tornado’s secure cookie support into Flask.

It’s basically a modified version of the current Tornado secure cookie code, and it’s quite easy to use in Flask as well.
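
To give a feel for the technique, here is a conceptual sketch of Tornado-style signed cookies (this illustrates the idea, not flask-secure-cookie’s actual API – check the repo for that):

    import base64
    import hashlib
    import hmac
    import time

    SECRET = "change-me"  # placeholder signing key

    def create_signed_value(name, value):
        # The value travels with a timestamp and an HMAC signature, so the
        # client can read it but cannot forge or alter it.
        ts = str(int(time.time()))
        b64 = base64.b64encode(value.encode()).decode()
        sig = hmac.new(SECRET.encode(), (name + b64 + ts).encode(),
                       hashlib.sha256).hexdigest()
        return "|".join([b64, ts, sig])

    def decode_signed_value(name, signed):
        b64, ts, sig = signed.split("|")
        expected = hmac.new(SECRET.encode(), (name + b64 + ts).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            return None  # tampered with, or signed with a different key
        return base64.b64decode(b64).decode()

From Flask you would then set the signed value with the response’s set_cookie() and run the returning cookie through decode_signed_value() before trusting it.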

Grab it and share your comments and opinions. It’s also available on PyPI under the name “flask-secure-cookie”.

nsq-to-gs – Streaming NSQ messages directly to Google Cloud Storage


In addition to my previously published (very early) project to stream NSQ messages directly to BigQuery, I am happy to present a modified version of nsq-to-s3 that supports streaming NSQ messages directly to Google Cloud Storage.

Grab it while it’s hot from the nsq-to-gs repo.

I do see a future for a merged version of these two projects that supports both S3 and Google Cloud Storage, but this will have to be enough for now.

The current version has the same functionality as the latest nsq-to-s3 version and was adapted to support Google Cloud Storage with minor modifications (such as the default path and filename formats).
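
For the curious, here is a rough Python sketch of the general shape of such a pipeline (the real tool is written in Go; the topic, channel, bucket and object names are all placeholders):

    import time

    import nsq  # pynsq
    from google.cloud import storage

    bucket = storage.Client().bucket("my-archive-bucket")  # placeholder
    pending = []

    def handler(message):
        # Buffer message bodies and flush a batch to a new object in
        # Google Cloud Storage every 100 messages (no time-based flush here).
        pending.append(message.body)
        if len(pending) >= 100:
            name = "nsq/batch-%d.ndjson" % int(time.time())
            bucket.blob(name).upload_from_string(b"\n".join(pending))
            del pending[:]
        return True  # finish (ack) the message

    nsq.Reader(
        message_handler=handler,
        lookupd_http_addresses=["http://127.0.0.1:4161"],
        topic="events",
        channel="gs-archiver",
    )
    nsq.run()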

nsq-to-bigquery – Stream messages from NSQ directly to Google BigQuery


In the spirit of nsq-to-XXX tools such as nsq-to-http and nsq-to-file – I bring you the very first version of nsq-to-bigquery.

nsq-to-bigquery, as the name suggests, streams data from an NSQ channel into Google’s BigQuery using the Streaming API. It provides a very effective way to stream data that should then be further analysed and aggregated using BigQuery’s excellent performance.

This is a (very) initial version so it has some limitations and assumptions.

Limitations / Assumptions

  • The BigQuery table MUST exist prior to streaming the data
  • The NSQ message being sent MUST be a valid JSON string
  • The JSON format MUST be a simple flat dictionary (keys mapping to simple values – a value can’t be another dictionary or list)
  • The JSON format MUST match the schema of the BigQuery table (see the example after this list)
  • At the moment there is no support for batching so each message will issue an API call to BigQuery with a single line of data
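
As an illustration of a message that satisfies these constraints, here is a made-up flat event published through nsqd’s HTTP API (the topic name and column names are hypothetical, and the matching BigQuery table must already exist):

    import json
    import requests

    # Flat dictionary matching a hypothetical table with columns
    # ts (TIMESTAMP), user (STRING) and amount (INTEGER); a nested dict
    # or list anywhere in here would violate the constraints above.
    event = {"ts": "2015-12-31T23:59:59Z", "user": "alice", "amount": 42}

    # Publish it to a placeholder topic via nsqd's HTTP /pub endpoint.
    requests.post(
        "http://127.0.0.1:4151/pub",
        params={"topic": "billing-events"},
        data=json.dumps(event),
    )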

Planned features:

  • Batching, with flushes triggered after X rows or Y seconds since the last flush
  • Flushing in parallel with receiving messages, so there is almost no added delay

Stay tuned to the GitHub repo for more news.

gonionoo – Go wrapper for the Tor Network Status Protocol – OnionOO

I’ve been running a Tor exit node in the Netherlands since August 2013. I believe in the cause of Tor, and it was only a matter of time before I started adding code in some form or another.

gonionoo is a Go wrapper for OnionOO – the Tor network status protocol – and the first step in a slightly larger project I’ve been planning ever since I became a Tor exit node operator.

The OnionOO API has lots of interesting data on the Tor network. You can see it visualized as part of the Atlas project.
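
Since I won’t guess the wrapper’s own API here, this short Python snippet shows the underlying protocol that gonionoo wraps – fetching the details document for a relay (search, limit and type are standard OnionOO parameters; moria1 is just a well-known example relay):

    import requests

    resp = requests.get(
        "https://onionoo.torproject.org/details",
        params={"search": "moria1", "limit": 1, "type": "relay"},
    )
    # The details document contains a "relays" list with per-relay fields.
    for relay in resp.json().get("relays", []):
        print(relay["nickname"], relay.get("advertised_bandwidth"))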

MongoDB Replica-Set Aware Backup Script

I’ve created a nice little bash script for taking MongoDB backups that is replica-set aware.

It will only take a backup from a replica, so if you have the classic master/replica/arbiter configuration you can set up the script via cron on both the (current) master and the replica, and the backup will only run on the replica.

It will then tar.gz the backup and upload it to Google Storage. It can easily be adapted to upload the backup to S3 using s3cmd or the AWS CLI.
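
The script itself is bash, but the core replica-set-aware check is easy to show. Here is a rough Python equivalent (paths and the bucket name are placeholders):

    import subprocess
    from pymongo import MongoClient

    # Ask the local mongod about its current role in the replica set.
    status = MongoClient("localhost", 27017).admin.command("isMaster")

    if status.get("secondary"):  # True only on a replica; master and arbiter skip
        subprocess.check_call(["mongodump", "--out", "/tmp/mongo-backup"])
        subprocess.check_call(["tar", "czf", "/tmp/mongo-backup.tar.gz",
                               "-C", "/tmp", "mongo-backup"])
        # Ship the archive to a placeholder Google Storage bucket.
        subprocess.check_call(["gsutil", "cp", "/tmp/mongo-backup.tar.gz",
                               "gs://my-backup-bucket/"])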

Cross-posted at Forecast:Cloudy (my cloud blog).

Seedcamp Tel-Aviv 2012

It’s that time of the year and Seedcamp Tel-Aviv is back (for the fourth year!). This time lool Ventures is part of the event!

In one of my hats I’m the CTO of lool Ventures, and I’ll be there as a mentor to give advice and share my experience in building a startup.

So if you have a great idea and have started working on it, be sure to apply now.

Requiem for a modem

Two days ago I shut down the longest-running electronic device I have ever owned.

Alcatel SpeedTouch Home - Image from isphelp.info

The device was my Alcatel SpeedTouch Home ADSL modem, which I got circa 2001 when I was lucky enough to get an ADSL line at home.

It was only turned off when there was a power failure or when I moved apartments.

It survived 6 PCs, 5 laptops, 4 routers, 6 apartments spanning 4 cities and about 10 different cell phones.

It was hacked to use PPPoE instead of its default PPTP, hacked again to function as a router, and then hacked back to being just a modem.

When I started using it I had a 1.5Mbps ADSL line. That grew to 2.5Mbps and finally to 5Mbps – its maximum supported speed (taking into account the state of the infrastructure, my distance from the switchboard, etc.).

When the New Generation Network (NGN) of my landline provider Bezeq was deployed, the modem couldn’t keep up with its 5Mbps speed: the uplink speed changed and it couldn’t sync. I downgraded to 2.5Mbps until I could get a replacement modem.

Once I got the newer modem, I shut down the old one for good. It was now obsolete, old and unable to support faster speeds. No one would want it. No one would need it. No one would use it.

I will always remember it as the device that saved me from my happy dial-up days and brought me into the broadband age. It never failed, never stopped working and handled whatever bits were thrown at it.

It is now time for you to rest in modem heaven, where the line is always synced and the bits flow freely.

May all my current and future modems serve me as well as you did.

Goodbye old friend. We had good times.

Scott Berkun’s Mindfire: Big Ideas For Curious Minds – Book Review

I had the pleasure of reading Scott Berkun’s newest book – Mindfire: Big Ideas for Curious Minds. I was also fortunate to get it for free during the short window when Scott gave it away on his site, but this is not a guilt-driven review written because I got the book for free.

Mindfire is a collection of 30 essays which Scott wrote in various places, mostly on his blog. The essays were cleaned up and prepped for the book, which makes the reading very clean and flowing. Scott’s writing style is flowing and funny, and while it may at times seem like a self-empowerment / self-help book, it really isn’t.

I look at it more as a collection of precise and clear observations on the human condition and on behavior, alone and in a group. Some of the essays specifically talk about work-related situations, but in most cases you can apply some of the tips and wisdom of this book to almost any interaction with other people.

I really enjoyed reading it, and in some cases fully sympathized with the essay’s topic and resolution.