Monday, February 29, 2016

2016-02-29 Trading Top5s Securities: GE

Scrape for Monday, February 29th, 2016. Happy Leap-Year Day!

Snapshot taken and moved to google drive

scrape:
2016-02-29
Mkt_Cap:BRK.A,BRK.B,BABA,RIO,WFC-L|XOM,WFC,GOOGL,GOOG,VRX
Price:FDML,BSFT,SDRL,UNFI,ENDP|VRX,PHI,ZBRA
Volume:BAC,SIRI,FCX,CHK,QEP,JCP,MRO,AAPL,PFE,GE
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-29 data
bubbles:

Let's analyze GE
geophf:writing geophf$ analyze GE
Wrote analysis files for GE
GE SMA


GE EMA

GE Stochastic Oscillators

Friday, February 26, 2016

2016-02-26 Trading Top5s Securities: GE

Analysis for 2016-02-26

Took snapshot, backed up to google drive.

scrape:
2016-02-26
Mkt_Cap:MSFT,GOOGL,GE,GOOG,BUD|PG,NTES,IBM,COST
Price:TREE,DRII,NPO,TMH,OIBR.C|NTES,ZBRA,TILE,OLED
Volume:BAC,JCP,SWN,PFE,ETE,GE,FCX,NFX,F,SPY
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-26 data
bubbles:

Let's analyze GE
geophf:writing geophf$ analyze GE
Wrote analysis files for GE
GE SMA


GE EMA

GE Stochastic Oscillators

Wednesday, February 24, 2016

2016-02-24 Trading Top5s Securities: CHK

Scrape for 2016-02-24

Snapshot of graph database moved to google drive

scrape:
2016-02-24
Mkt_Cap:AAPL,FB,GOOG,GOOGL,BBL|BHP,RIO,GE,WFC
Price:ECA,CHK,DWA,JBT,ABCO|CAR,LMCB,MRD,HTZ
Volume:BAC,CHK,FTR,F,FCX,ECA,GE,SPY,SIRI,QQQ
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-24 data
bubbles:

Let's analyze CHK
geophf:writing geophf$ analyze CHK
Wrote analysis files for CHK
CHK SMA



CHK EMA

CHK Stochastic Oscillators

Tuesday, February 23, 2016

2016-02-23 Trading Top5s Securities: MSFT

Scrape for 2016-02-23

Took snapshot of graph db, moved it to google drive

scrape:
2016-02-23
Mkt_Cap:WMT,LMCB,HD,AAPL,MSFT|JPM,GOOGL,RDS.B
Price:LMCB,MOMO,WSO.B,FIT,UWTI|TCK,ANAC,QEP
Volume:BAC,COG,FCX,FTR,CHK,FIT,SIRI,ABX,PFE,AAPL
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-23 data
bubbles:

$AAPL was the strongest showing today, but we haven't looked at $MSFT in a while; let's analyze it
geophf:writing geophf$ analyze MSFT
Wrote analysis files for MSFT
MSFT SMA



MSFT EMA

MSFT Stochastic Oscillators

Monday, February 22, 2016

2016-02-22 Trading Top5s Securities: FB

Scrape for 2016-02-22

Took database snapshot, moved to google drive
Released scrape script.
scrape:
2016-02-22
Mkt_Cap:AMZN,PTR,FB,MSFT,VRX|HSBC,TAP.A,HON,STZ.B
Price:CHK-D,RDUS,CHK,CIG.C,PBR|STRZB,VRX,INVA,TAP.A,DF
Volume:BAC,FCX,AA,VALE,PFE,SIRI,GE,FB,MRO,AAPL
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-22 data
Relocked scrape script.

Bubbles:

Let's analyze $FB today
geophf:writing geophf$ analyze FB
Wrote analysis files for FB
FB SMA



FB EMA

FB Stochastic Oscillators

Friday, February 19, 2016

2016-02-19 Trading Top5s Securities: MRO

Scrape for 2016-02-19

SHOOT! Forgot to do the daily snapshot! I must make this a habit or else! 

Eheh: so I rewrote my scrape shellscript:
#!/bin/bash

# yeah, well, there it is :/
# nag myself about the snapshot before any scraping happens

read -p "Did you do your backup today? (y/n) " did_backup
if [ "$did_backup" != "y" ]; then
    echo "Go do the backup first, then come back."
    exit 1
fi

enscrape `date +"%Y-%m-%d"`
Okay, let's do the scrape:
2016-02-19
Mkt_Cap:AMZN,FB,GOOGL,GOOG,TM|INTC,VRX,MSFT,T
Price:WTW,BRC,ANET,TRN,SWN|LMCB,TAP.A,ECA
Volume:BAC,GRPN,VALE,INTC,PFE,FCX,MRO,SIRI,GE,AAPL
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-19 data
Bubbles:

Let's analyze $MRO. $INTC looked interesting, too, as a same-day winner in both the volume and mkt-cap categories, but I haven't seen MRO before, so let's look into it.
geophf:writing geophf$ analyze MRO
Wrote analysis files for MRO
MRO SMA


MRO EMA


MRO Stochastic Oscillators

2016-02-18 Trading Top5s Securities: AAPL

Scrape for 2016-02-18

First up: snapshot! ... Done.

Now the scrape

2016-02-18
Mkt_Cap:IBM,JNJ,VZ,T,AAPL|GOOGL,GOOG,WMT,FB
Price:IM,CHK-D,LZB,LOPE,JACK|TYL,LMCB,CVRR,WLL
Volume:BAC,DVN,FCX,MRO,PFE,GDX,AAPL,QQQ,KMI,CHK
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-18 data
bubbles


Let's analyze $AAPL because ... like, do you need a reason? Okay, here's a reason:

In fact, that was over 100 reasons to analyze AAPL! So, here we go:
geophf:writing geophf$ analyze AAPL
Wrote analysis files for AAPL
AAPL SMA



AAPL EMA

AAPL Stochastic Oscillators

Wednesday, February 17, 2016

2016-02-17 Trading Top5s Securities: MSFT

Scrape for 2016-02-17

First let's take a snapshot of the existing database, and ... archived and done.

Now the scrape and upload of today's data.
2016-02-17
Mkt_Cap:GOOG,PTR,MSFT,FB,GOOGL|GILD,LMCB,DD-A,PFE,WFC-L
Price:FOSL,TCK,TRGP,TAC,TRU|ENLC,LMCB,CLMT,BLMN,CASY
Volume:BAC,FCX,MRO,KMI,GRPN,AAPL,FB,QQQ,VALE,MSFT
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-17 data
Bubbles

Let's analyze today's stand-out: $MSFT
geophf:writing geophf$ analyze MSFT
Wrote analysis files for MSFT
MSFT SMA



MSFT EMA

MSFT Stochastic Oscillators

Our Daily Scrape, a recipe

Step 1: Take a snapshot of the database; archive it

See the excellent article, written by the all-around good guy, here (url: http://logicalgraphs.blogspot.com/2016/02/backup-plan9-from-outer-space.html)

Step 2: Scrape

Scrape – described at the bottom of the Trading Analytics tab (http://logicalgraphs.blogspot.com/p/trading-analytics.html) – is an application that scrapes the top5s securities lists from google finance and saves the results in two places: a graph database (configured via the environment) and a semi-structured matrix of historical top5s (http://lpaste.net/4714982275408723968).
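(If you ever need to pull one day back out of that matrix, a grep will do it – a sketch, assuming the CSV rows mirror the four-line date / Mkt_Cap / Price / Volume blocks you'll see in the scrape output below; adjust it if your top5s.csv is shaped differently:)

# fetch the 2016-02-16 block from the historical matrix
grep -A3 '^2016-02-16' /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv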

The methodology of scrape is this. After the trading day, at 6 pm-ish, you execute scrape:
geophf:writing geophf$ scrape
And it does its thing (see below under 'enscrape').

Now, if you, as I often do, sleep on your keyboard and wake up after midnight, do not run scrape! Instead, run enscrape with the previous day's date as the argument: the top5s are for yesterday, not today, so we want to enter those data under that day:
geophf:writing geophf$ enscrape 2016-02-16
######################################################################## 100.0%
Saved to /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/sources/google/2016-02-16-index.html ...

2016-02-16
Mkt_Cap:AAPL,BABA,MSFT,GOOGL,LMCB|QVCB,AIG,FB,ABEV
Price:ADT,GRPN,CRAY,LPLA,CSIQ|NUGT,CYH,STRZB,BVN,LMCB
Volume:BAC,ADT,QQQ,CSCO,SIRI,SPY,INTC,IBN,KGC,ETP
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-16 data
HTTP/1.1 100 Continue

HTTP/1.1 200 OK
Server: nginx
Date: Wed, 17 Feb 2016 09:18:38 GMT
Content-Type: application/json
Content-Length: 125
Connection: keep-alive
Access-Control-Allow-Origin: *

{"results":[{"columns":[],"data":[]},{"columns":[],"data":[]},{"columns":[],"data":[]},{"columns":[],"data":[]}],"errors":[]}

Saved 2016-02-16 top 5s to GrapheneDB

The 'thing' is this: you must run scrape/enscrape before the markets open the next trading day. As soon as the markets open, the top5s for the previous day go away and start to fluctuate with the market, minute to minute.

Scrape after 6 pm, enscrape before 9 am (hopefully before 8 am): that's your window; don't screw up the data by violating it.

Fer realz, yo.
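If you want the machine to enforce that window for you, here's a sketch of a wrapper (not my actual scrape script; the 6 pm / 9 am cut-offs and the BSD/macOS date flags are assumptions to tune to your own setup):

#!/bin/bash

# guard the scrape window: an evening run scrapes today, an after-midnight run
# enscrapes yesterday, and anything during market hours is refused
# (BSD/macOS `date -v-1d` assumed; on Linux, use `date -d yesterday` instead)

hour=$((10#$(date +%H)))    # current hour, leading zero stripped

if [ "$hour" -ge 18 ]; then
    enscrape "$(date +%Y-%m-%d)"          # after the close: today's top5s
elif [ "$hour" -lt 9 ]; then
    enscrape "$(date -v-1d +%Y-%m-%d)"    # after midnight: yesterday's top5s
else
    echo "The markets are open (or about to be); the top5s are fluctuating. Not scraping."
    exit 1
fi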

Step 3: Capture Top5s Data in the Daily Report

So, you have your daily reports – e.g.: http://logicalgraphs.blogspot.com/2016/02/2016-02-12-trading-top5s-securities-ge.html – divided into two parts: the reportage and the analysis. The reportage is this:

Copy and paste the top5s for the day into the report. That is, from the above scrape/enscrape run-off, copy:
2016-02-16
Mkt_Cap:AAPL,BABA,MSFT,GOOGL,LMCB|QVCB,AIG,FB,ABEV
Price:ADT,GRPN,CRAY,LPLA,CSIQ|NUGT,CYH,STRZB,BVN,LMCB
Volume:BAC,ADT,QQQ,CSCO,SIRI,SPY,INTC,IBN,KGC,ETP
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-16 data
... into the daily report.

Open up your graph database and get a screen shot of today's top5s with the previous day's top5s categories expanded to see interesting day-to-day trends (precursor to analysis), e.g.:

Note in the above screen shot that before I did scrape, I exported a snapshot of the database (then shunted that export off to my company's google drive).

Step 4: Analysis

Note that when you expand and tease apart the graph of today's top5s, Some Stocks Start to Stand out Stupendously (I call it the S5-effect; I just invented that, actually). Pick the one that's of interest to you.

interest, n.: 1. what you don't get on your money in a savings account anymore
2. whatever is of interest to you, see: 'interest.'

That's very ... 'helpful'! NOT! 

So, to help in a substantive way, I am developing tools to automate the 'feelz' for what's interesting – particularly the Repeatinator2000! (http://lpaste.net/781423227393015808) and the new, improved GAPINATOR3004!! (http://lpaste.net/5017845158461308928) – but these are very much alpha-stage tools at present, so for now you have to develop, with practice, your own feel for what looks interesting to you.

You know: make your own decisions, ... liek: on your own, liek.
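Until those tools mature, you can get a crude feel for repeaters with nothing but shell plumbing. This is only a stand-in for the Repeatinator idea, and it assumes the rows of top5s.csv look like the scrape output above – symbols separated by colons, commas, and pipes – so adjust it for the file's actual shape:

# count how often each symbol has shown up in the last ten rows of the matrix;
# symbols that keep reappearing across days are usually worth a closer look
tail -10 /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv \
  | tr ':|,' '\n' \
  | grep -E '^[A-Z]+([.-][A-Z]+)?$' \
  | sort | uniq -c | sort -rn | head -15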

Today, I picked out LMCB as it has multiple connections, and I hadn't seen it before.

When you pick a stock, run it through analyze, a program described at the bottom of the Trading Analytics tab: http://logicalgraphs.blogspot.com/p/trading-analytics.html
geophf:writing geophf$ analyze LMCB
analyze: Ratio has zero denominator

Okay, whoopsie! This does happen (like, twice in the last nine months), and it happens on johnny-come-latelies to the top5s list – that is, possibly, newly minted billion-plus-dollar companies that don't have three months of trading history.

Possible? Maybe? Maybe they just went public, or maybe they changed stock symbols and their old trading information doesn't carry forward?

I don't know. I don't care. I just move on and pick a different stock to analyze.

So, missed opportunities here? Perhaps. And to convince me of that, write a white paper on how I am missing out big-time on these rare opportunities.

Anyway.

So, let's regroup. I just picked BAC because it had some good things going, but in retrospect (i.e.: had I slept on it, perhaps), maybe LPLA would have been an interesting case for analysis.

The Markets: so many interesting case studies! So little time!
geophf:writing geophf$ analyze BAC
Wrote analysis files for BAC
The analyze tool spits out three CSV files: BAC-EMAS.csv, BAC-kds.csv, and BAC-SMAS.csv. Since these are comma-separated value files, you can easily load them into a data visualization tool of your choice (e.g.: Excel for you, maybe. Me? I use Numbers, because I'm not stupid: I own a Mac), then take screen shots of your analytics results. You can read up on what SMAs, EMAs, and the %K vs. %D lines of Stochastic Oscillators mean on investopedia; e.g., here's the write-up on SMAs: http://www.investopedia.com/terms/s/sma.asp
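And if you want to sanity-check the SMA column yourself, the arithmetic is just a rolling average of the last N closes. Here's a sketch over a hypothetical two-column date,close CSV (which is not necessarily the layout analyze emits):

# 20-day simple moving average over a "date,close" file (hypothetical layout);
# prints date,close,sma20 once twenty closes have accumulated
awk -F, 'NR > 1 { buf[NR % 20] = $2; n++
                  if (n >= 20) { s = 0; for (i in buf) s += buf[i]
                                 printf "%s,%s,%.2f\n", $1, $2, s/20 } }' BAC-closes.csv

The EMA and the %K/%D lines are the same kind of windowed arithmetic with different weightings; the investopedia write-ups spell out the formulas.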

With the report completed, I blog it (sample link at the top of this recipe article) and also tweet the graph and three charts on our company's @logicalgraphs twitter account.


Easy-peasy!

2016-02-16 Trading Top5s Securities: BAC

Scrape for 2016-02-16
2016-02-16
Mkt_Cap:AAPL,BABA,MSFT,GOOGL,LMCB|QVCB,AIG,FB,ABEV
Price:ADT,GRPN,CRAY,LPLA,CSIQ|NUGT,CYH,STRZB,BVN,LMCB
Volume:BAC,ADT,QQQ,CSCO,SIRI,SPY,INTC,IBN,KGC,ETP
Updated /Users/geophf/Documents/OneDrive/work/1HaskellADay/Seer/data/top5s.csv with 2016-02-16 data
Bubbles

Let's analyze $BAC
geophf:writing geophf$ analyze BAC
Wrote analysis files for BAC
BAC SMA



BAC EMA

BAC Stochastic Oscillators

Monday, February 15, 2016

Plan C: Fly in to the Danger Zone, or: Backup Restore

Okay, you are making and archiving daily snapshots of your database. Great! You are lightyears ahead of most small businesses.

But do you know what you've got? The first step of data security is your backup; the second step is the restore of a backup. You don't know whether your data is safely saved off unless you know you can restore from that save to full operability. So test that, and get that assurance now: not when you're out of the frying pan and into the fire, but now, while things are sailing smoothly on an even keel.

Yes, I am the master of the metaphor, so much so that I am actually a `pataphorist.

So.

Let's do this.

You have your backup. You have two ways to test that restore will work when it needs to:

1. Blow away your production data and restore it from your snapshot.

That, right there, is your 100%-guarantee for that snapshot, I tell you what. And you can do that, because one day, you're gonna hafta.

(dos words, doe)

2. In a lower environment, restore the snapshots and do your quality assurance tests there.

Most people will pick option 2. Perfectly fine. It's not 100%, but it does give you the confidence that you can restore your snapshot.

We'll go over option 2 in brief, because I'm actually going to do option 1 in this essay: I'm going to blow away my production database and restore a snapshot there. Why? Because I'm going to do this article ... LIKE A BOSS!

Backup Restore Locally

So, option 2: have on your local system/laptop the same version of neo4j that is running your production data on GrapheneDB DaaS.

Go into your neo4j data directory and just blow away graph.db/ there:

geophf:1HaskellADay geophf$ ls neo4j-community-2.3.0/data
README.txt dbms graph.db import log
geophf:1HaskellADay geophf$ rm -rf neo4j-community-2.3.0/data/graph.db/
geophf:1HaskellADay geophf$ ls neo4j-community-2.3.0/data
README.txt dbms import log

Next, locate your snapshot and copy it into your neo4j/data directory:

geophf:1HaskellADay geophf$ ls -l ~/Desktop/logical-graphs/backups/
total 680
-rw-r-----@ 1 geophf  staff  344810 Feb 14 21:34 20160215-022930.zip
geophf:1HaskellADay geophf$ cp ~/Desktop/logical-graphs/backups/20160215-022930.zip neo4j-community-2.3.0/data
geophf:1HaskellADay geophf$ cd neo4j-community-2.3.0/data/
geophf:data geophf$ mkdir graph.db
geophf:data geophf$ mv 20160215-022930.zip graph.db/
geophf:data geophf$ cd graph.db/
geophf:graph.db geophf$ unzip 20160215-022930.zip 
... unzips database ...

Now remove the copy of your snapshot:

geophf:graph.db geophf$ rm 20160215-022930.zip 

Now start up your instance and check that your data is restored from your snapshot:

geophf:1HaskellADay geophf$ ./neo4j-community-2.3.0/bin/neo4j start

And the Cypher query in the web-client shows, yes, a full restore:



Okay, that was a nice, cool restore of a snapshot in a nice, safe place: locally.
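(If you'd rather verify from the terminal than eyeball the web-client, a node count against the local transactional endpoint does the same job. A sketch, assuming the default localhost:7474 endpoint; add credentials if your local instance has auth turned on:)

# ask the freshly restarted instance how many nodes it holds; a count that
# matches what you expect is a decent smoke-test of the restore
curl -s -H "Content-Type: application/json" \
     -d '{"statements":[{"statement":"MATCH (n) RETURN count(n)"}]}' \
     http://localhost:7474/db/data/transaction/commit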

Backup Restore in Production

Let's do this in production now.

And how we do this is to fly right into the danger zone. Go to the Admin tab of your GrapheneDB instance and select "Empty Database":





You'll get this challenge.

Challenge: accepted.

Once the database is emptied, it takes you to the overview page which shows you your database is now, indeed, empty:


Okay, we've emptied our database. (Another way to go about this is to delete your database and then create a new one, but then you have to worry about settings such as security configuration and connection information; I just went with the "Empty Database" option.) Our next step is to restore the snapshot.

Let's do that. Return to the admin tab and go back into the Danger Zone, this time choosing the "Restore database" option. Once you select it, you're prompted to supply a snapshot (along with instructions on how to create one from your local database if you don't have one already). The location can be either local or cloud storage, e.g. AWS S3 – provided your GrapheneDB user has access to those services (something you have to verify with your security folks). I have a local copy, so I provide that. Having done that, select "Restore."

Several modal dialogs flash in rapid succession to notify you of progress in the restore process, then you get the following message:

Great! Again, let's verify. The overview page, once refreshed (so, hint: don't panic; you need to refresh manually here!), shows a populated database:

And we can dig as deeply as we need to in order to verify our data is restored from our snapshot:

Yay! Happy ending! And in Production, too.

Plan C

That was painless.

But what if it weren't? What if you restored and the snapshot you have is corrupted? Do you have an alternative approach if the restore doesn't work?

I do. I have the source data the graph is derived from, and an ETL process that rebuilds my graph in under a minute – and I tested that out yesterday. That's why I had the confidence to take this approach.

Backup restore is a very simple process in general, and here specifically so, because of the ease with which the GrapheneDB DaaS walks you through the process, holding your hand the whole way.

This is all the more reason for you to do due diligence on your side. If you do a backup restore locally first, you have confidence you can do it in the cloud on your production system (Plan B). Further, I had a Plan C: knowing I could rebuild the graph from the original sources if I had to.


Always have a Plan B, but if Plan B fails, have Plan C ready, just in case. There's nothing like that confidence when proceeding with operations like these on production data.

2016-02-15 Markets Closed: Presidents' Day

Today, 2016-02-15, is Presidents' Day in the U.S.A. The Markets are closed today.

Sunday, February 14, 2016

Backup Plan...9 from Outer Space

How to make backups of your neo4j database on GrapheneDB DaaS

If you have a professional plan with GrapheneDB DaaS, then backups, for you, are automated. At the lowest professional tier, backups are only retained for a week, so read the rest of this article for ways to save data longer-term.

If you're a hobby-edition owner, you get the following message:

Does this mean you are unable to make backups of your database? It appears so, on the face of it, but actually, no: you can make backups, and you can do so daily. The effort is on your side, but when you do corrupt your database with a dirty load (as I have), these backups are an essential part of restoring integrity to your stored data.

So, let's do this.

How?

You export your database, and then you find some cloud-service to manage these daily exports, or, again, you take on the management of these exports. These exported copies of your database are, ipso facto, your daily backups.

To export your database, go to the Admin tab and find the "Export Database" section.


Select "Export Database," read the warning about database-stoppage, and select "Export Database," again.



Then, you're moved onto the next modal dialog – "Export Ready" – which offers a download of a snapshot of your database. Select "Download."

Boom! A snapshot of your database is now local. As you see at the bottom of your web-browser in your task-status bar:

Now that you have the snapshot, select "Close" on the modal. Officially, you're 'done.'

Now, what you do with it is up to you. You can move it to a hard medium (e.g.: DVD-RW, or shared network drive), or you can upload it to a cloud service, such as AWS S3 or your google drive. Your choice.
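For the cloud-service route, one line with the AWS CLI gets the snapshot into S3. A sketch: the bucket name here is made up, and it assumes you have the aws command installed and credentials configured:

# ship the downloaded snapshot off to an S3 bucket (hypothetical bucket/prefix)
aws s3 cp ~/Desktop/logical-graphs/backups/20160215-022930.zip s3://my-graph-backups/graphenedb/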

Backup management plan

Of course, you can save every daily backup forever, but what works best? From experience, the older the data gets, the less interesting it gets. So what I do is: save the dailies for a week; beyond that, save only the Monday of each prior week (that is: delete every non-Monday backup older than a week); beyond a couple of months, save only the first of each month; and beyond a year, save only the first of the year. Usually, one finds a mistake the same day one makes it, so the rollback is one, two, or three days. Usually. Everything beyond immediacy is just paranoia, in my experience; so, out of paranoia, I keep a monthly backup for a year and, beyond that, a yearly one. Once, twice, or thrice I've had to dig into archives older than a year to try to restore something I'd lost, and often the trouble of restoration is so great that I say 'eh, whatevs,' anyway.
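If you'd rather automate that policy than prune by hand, here's a sketch. The BSD/macOS date flags, my backup directory, and the YYYYMMDD-HHMMSS.zip filenames are all assumptions, and it only reports what it would delete, so you can eyeball it before wiring in an actual rm:

#!/bin/bash

# prune snapshots per the policy above: dailies for a week, Mondays for
# about two months, first-of-month for a year, first-of-year forever
BACKUPS=~/Desktop/logical-graphs/backups
now=$(date +%s)

for f in "$BACKUPS"/*.zip; do
    stamp=$(basename "$f" | cut -c1-8)                 # YYYYMMDD
    ts=$(date -j -f "%Y%m%d" "$stamp" +%s)
    age=$(( (now - ts) / 86400 ))                      # age in days
    dow=$(date -j -f "%Y%m%d" "$stamp" +%u)            # 1 = Monday
    dom=$(date -j -f "%Y%m%d" "$stamp" +%d)            # day of month
    mon=$(date -j -f "%Y%m%d" "$stamp" +%m)            # month

    keep=no
    [ "$age" -le 7 ]                        && keep=yes    # this week's dailies
    [ "$age" -le 60 ]  && [ "$dow" = "1" ]  && keep=yes    # Monday weeklies
    [ "$age" -le 365 ] && [ "$dom" = "01" ] && keep=yes    # monthly firsts
    [ "$dom" = "01" ]  && [ "$mon" = "01" ] && keep=yes    # yearly firsts

    [ "$keep" = "no" ] && echo "would delete: $f"
done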

But, you know: saving older data, because reasons, and because everybody does it, so it Must Be Vital for my Health and Well-Being(tm).

As a point of reference, the GrapheneDB DaaS saves one week of daily backups; anything older than a week is discarded.

Nice to see some realists in the technology sector for a change.