Category Archives: Technology

Digital Cleanse

I’ve been thinking about how much of my time is leaking to the digital, always-available norm, especially in the high tech industry where I work.  Basically, I’m in the same group of folks who are wired up to the inbox and always available.  This has become the norm because it is just easy to let it happen, and it creeps into life one television, computer, email, cell phone, text message, Google search, Facebook post, Tweet, Slack message at a time.  I am defining leaking as spending time without intent – to be blunt, wasting time.  One of many, many examples: I am on my phone during my commute reading about my industry and I get distracted by a fancy algorithm that presents plenty of additional opportunities to wander off like a wild monkey through the forest – and I do, way too often.

As a computer person, I most definitely need to spend tons of time in the digital world, so I am specifically talking about optimizing around intent – I am not talking about throwing my phone away and taking up a new career outside the high tech industry.  I am running a personal experiment: taking some time to reflect, 28 days in February, on what is important to me and then mapping which technologies add value to what I enjoy doing, work on, and participate in – in a nutshell, what my intent is when I connect.  The taking-time part is important because while I will start by spending an hour jotting down the starting point, I want daily checkpoints integrated within day-to-day family activities, work life and personal time to make sure that I am capturing everything, reflecting on importance, and making some decisions about intent.  Once I have my value framework mapped to digital tools and intent, I’ll spend the next couple of months – March to June – sticking to the plan, logging time, tools and activities while also noting what works, what does not, and making improvements along the way.  I’ll share some details in a future post.


Persistent NAS Mounts on Mac

I use a 5-bay Synology network-attached storage (“NAS”) device, Model 1513+, with 5 x 6TB drives in a number of different RAID configurations for various storage needs such as a personal media network, backups, and general storage.  There are plenty of things that you can accomplish by browsing to DSM, the web interface that Synology provides.  However, I want to mount NAS share points from my Mac client machine in a manner that persists across client restarts, log out/in, etc.  It wasn’t as simple as Googling for a post that explains exactly how to accomplish this.  You’ll need to use the command line and you’ll need root access (or at least know how to sudo su) so that you can edit protected files in the /etc directory on your Mac.  You’ll also need to have set up at least one share point on your NAS (I named mine /storagesharepoint).  Here is what I learned…

To add persistent mounts to a Mac, you have to modify your /etc/auto_master file.  Mine looked like this before I monkeyed with it:

#
# Automounter master map
#
+auto_master # Use directory service
#/net -hosts -nobrowse,hidefromfinder,nosuid
/home auto_home -nobrowse,hidefromfinder
/Network/Servers -fstab
/- -static


I prefer to keep my configurations somewhat separated from the base configuration, so I added one line (/nas auto_chip) to include all my automatically mounted configurations; the /etc/auto_master file looks like this now:

#
# Automounter master map
#
+auto_master # Use directory service
#/net -hosts -nobrowse,hidefromfinder,nosuid
/home auto_home -nobrowse,hidefromfinder
/nas auto_chip
/Network/Servers -fstab
/- -static

The purpose of this addition is to map everything included in the /etc/auto_chip file to the Mac’s local /nas directory.

My /etc/auto_chip file looks like this (you can have multiple mappings configured):

# General storage mount
storage -fstype=afp afp://[username]:[password]@ds.local/storagesharepoint


Here is what is going on with this line in my /etc/auto_chip file (a quick way to hand-test the URL follows the list):
  • storage is the local directory under /nas where the NAS share point will be mounted.
  • afp is the file protocol that I am using to communicate with the NAS over the network; other file protocols are supported, I am just using this one.
  • [username] is the NAS user who has permission to access the share point on the NAS.  You’ll have to replace this, including the brackets “[” “]”, with your actual username.
  • [password] is the same NAS user’s password, and here too you’ll have to replace this with your actual password.
  • ds.local/storagesharepoint is the NAS share point: ds is the name of my NAS device and storagesharepoint is the name of my share point.  Here you’ll have to modify this to reflect your NAS device name and share point name.
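Before wiring this into the automounter, it’s worth verifying that the AFP URL actually mounts by hand.  Here is a quick manual check – this assumes the stock macOS mount_afp command and a scratch mount point:

mkdir -p /tmp/nastest
mount_afp "afp://[username]:[password]@ds.local/storagesharepoint" /tmp/nastest
ls /tmp/nastest
umount /tmp/nastest

If that listing shows your share point’s contents, the URL is good to drop into /etc/auto_chip.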
Finally, I had to create the /nas/storage directory on my Mac, so the command was:

mkdir -p /nas/storage
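One gotcha that isn’t obvious: the automounter has to re-read its maps before the new mount appears.  On macOS, flushing the automounter cache should pick up the changes (a restart works too):

sudo automount -vc
ls /nas/storage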
That’s it.  Now I can operate on my Mac’s local /nas/storage directory like any other local storage location, and all my files will really be stored on my NAS share point where I have RAID file protection, backups, etc…

Engineering Principles – Don’t Over-Engineer the Solution

A couple of years ago, I posted some thoughts about high-level architecture and design goals; really, they are just the tip of the iceberg.  Here are some additional engineering principles – predominantly, don’t over-engineer and do leverage the cloud.  Complex systems are difficult and costly to build, very expensive to maintain and scale poorly; be on the lookout for complex designs and sanity check yourself by explaining your design to a peer – hard to understand means bad design.  Simplify, simplify and then simplify again while you follow the 80/20 rule.  Over-engineering is typically the root cause of many costly problems and a very common mistake, so I thought I would add a few half-baked ideas here at the end of the decade.

In addition to over-engineering, engineers like to build stuff themselves – sometimes even things that other folks have already sunk many thousands of hours into building.  Don’t reinvent the wheel – open-source commodity services and cloud-based managed services will routinely let you focus your expensive engineering resources on creating value, quickly and efficiently.

Take risks and learn from mistakes

Agile teams value failing fast; discuss and learn from your failures.  Engineering culture tends to be risk-averse; resist this tendency by aggressively learning through doing while understanding the risks – there is no need to mitigate every risk scenario.  Evaluate risk, advocate for experimentation, and communicate transparently to the business.  Feature switches provide the ability to experiment in a production environment, as sketched below.
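As an illustration of the idea – the variable and function names here are made up, not from any particular framework – a feature switch can be as simple as a config value or environment variable gating the experimental code path:

# Hypothetical feature switch: route traffic to the new code path only when enabled
if [ "${FEATURE_NEW_PRICING:-off}" = "on" ]; then
new_pricing_engine "$order_id" # experimental path – can be shut off instantly in production
else
legacy_pricing_engine "$order_id" # known-good path
fi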

Failing to design for rollback is a faulty design

When an agile team is aggressively taking risks, experimenting in production using feature switches and expecting to fail fast, it is essential that you can easily roll back code, data migrations and configuration changes in the underlying system.  The rollback should be validated in a staging environment.  Application complexity and frequent code releases are not acceptable excuses for not investing in rollback.

Don’t depend upon QA to find errors

It is impossible to replicate a production environment for testing – it is expensive to keep in sync, it doesn’t have real user interaction and it will not have accurate customer data.  An agile team will emphasize experimentation in production over quality assurance during feature development and deploy small incremental releases with wire-on/wire-off functionality.

Design your application to be monitored

If you are interested in changing the conversation from the number of bugs to customer impact and resolution times, then you should learn to love useful monitoring: it can actually help you develop the product, decrease debugging times and understand customer impact.  If you are building a cloud-native product, use the provider’s logging services.  Think about your logging strategy up front by asking: is there a problem, what is the problem, and where did the problem start?  Inefficient or non-purposeful logging will significantly increase storage and compute costs.  Age the logs and ship the log data to a central location to improve usability and reduce incident response times.  Logging should also include product feature logging and analytics to enable the business to make data-driven decisions about the use and usefulness of features.
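To make the is-there-a-problem, what-is-it, where-did-it-start questions answerable, every log line needs at least a timestamp, a severity, a component and a correlation id.  A minimal shell illustration – the field choices are my own, not a standard:

log () { # structured-ish log line: when, how bad, where, which request, what happened
printf '%s level=%s component=%s request=%s msg="%s"\n' \
"$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$1" "checkout" "${REQUEST_ID:-none}" "$2"
}
log ERROR "payment gateway timeout after 30s"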

Design for statelessness, asynchronous communication and relaxed temporal constraints

Want to scale?  Independent microservice tiers that use asynchronous communication – queues using message-driven publisher-subscriber architectures – are essential.  It is very rare to need to maintain state server-side, and an agile team will proactively guard against such architectures; instead, rely on caching and stateless deployments such as Lambda and ephemeral machines using AWS Elastic Beanstalk and ECS.  Beyond scaling, these principles will also enable modular designs, autonomous functions, fault tolerance, and degraded service rather than outages.

Beyond avoiding server-side state, alleviate temporal constraints, as coupling significantly undermines fault tolerance and scalability; temporal coupling includes synchronous calls between systems, systems in series, interactions where users wait for writes to complete and their evil twin, chained workflows.


High level architecture and design goals

I come across many businesses that are not grounded in basic high level architecture and design goals.  Basically, they simply leap from features to building stuff; most often this happens because everyone is in a mad rush to just get something working as soon as possible.  The fundamental flaw in this approach is the belief that spending the time to build it “the right way” takes longer and is more expensive.  I’ve found the exact opposite to be true.  And for the record, the idea that you can slop together a minimum viable product quickly and later insert a good architecture is pure nonsense – it never happens, ever.

Here are a few basic guidelines that apply to almost anything you are building; I’m certain there are plenty more that are specific to your particular challenge and the technology choices you make along the way.

Support Hierarchical Configuration

Don’t cripple configuration automation or unnecessarily burden your operations team with configuration management.  Design configuration points in a hierarchical fashion such that all deployments derive from a base configuration, with deployment-specific configurations that override the defaults.  Wherever possible, configuration should be modifiable at runtime and persistent across restarts.  A sketch of the idea follows.
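A minimal sketch of the hierarchy in shell terms – the file names and variables are hypothetical: every deployment sources the shared base file first, then an environment-specific file that overrides only what differs:

# /etc/myapp/base.conf – defaults shared by every deployment (hypothetical)
LOG_LEVEL=info
DB_POOL_SIZE=10

# /etc/myapp/prod.conf – production overrides only what differs
LOG_LEVEL=warn
DB_POOL_SIZE=50

# loader – base first, so deployment-specific values win
. /etc/myapp/base.conf
[ -f "/etc/myapp/${DEPLOY_ENV}.conf" ] && . "/etc/myapp/${DEPLOY_ENV}.conf"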

Facilitate Production Troubleshooting

Don’t count on your development team’s access to production; it isn’t a good practice to allow a live debug session attached to your production environment.  When log messages are written to record exception situations, they should include as much contextual information as possible in order to enable production support staff to recreate the conditions present at the time of the exception or undo data corruption that results from the error.

Fail Fast

Don’t unnecessarily retry what you already know won’t ever work.  If exceptional conditions occur, the system should not be configured or coded in a way that directs it to retry the action that failed.  This rule is particularly important where interfaces into 3rd party APIs are being configured.  If a 3rd party API is failing and we have no expectation that it should ever fail (say a pool API that provides us with database connections), there should be no attempt at reconnection.  The exception should be logged as a high priority exception (fatal) and messaged to a management system.
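In shell terms, the principle looks something like this – the pool helper is a made-up name for illustration, not a real API: one attempt, and a failure that should never happen is treated as fatal rather than retried:

get_db_connection () {
if ! conn=$(request_pool_connection); then # hypothetical pool API helper – called once, never retried
echo "FATAL: connection pool request failed – not retrying" >&2 # log as high priority and alert ops
exit 1
fi
echo "$conn"
}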

Automate Test Environment Management

Avoid accumulating manual test drag; there is a huge return on investing in test automation.  The design of test infrastructure should include a framework by which the overall test suite’s initial setup (configuration, code, results and data) is configured to a well-known state, and individual unit tests should otherwise be atomic in their own setup and execution.

N+1 Design

Lots of stuff happens in the real world, stuff that you will never anticipate.  Ensure that anything developed has at least one additional instance in the event of failure.  There should never be fewer than two of anything.

Design for Rollback

Your release will fail, no doubt about it.  Any new design should be backwards compatible with previous releases.  Test your rollback before every release or you will get caught and the impact may be fatal.

Design to Be Disabled

Enable efficient maintenance and minimize outages – planned or unplanned.  Any system or service endpoint should be designed to be capable of being “marked down” or disabled.

Design to be Monitored

There typically are signs that a failure will occur soon; make sure you know when bad things are accumulating.  The system should be able to identify when it is performing differently than normal, in addition to alerting when it is not functioning properly.  An example of this principle is instrumenting the application to report performance statistics on page render times or query execution times.

Asynchronous Design

While it is nice to count on quick and efficient compute pathways, high-scalability platforms often benefit from offloading and distribution, which usually relies on asynchronous designs.  Wherever possible, systems should communicate in an asynchronous fashion.

Atomic Compute and Stateless Systems

Don’t attempt to store state outside of your persistent data storage – you’ll unnecessarily create scalability obstacles and cripple your ability to build resilient platforms.

Scale Out Not Up

You’ll eventually not be able to buy a big enough server.  The system should be able to be horizontally split in terms of data, transactions and customers.


Outsource Unicorn

If I tell you that you can purchase a brand new iPhone for $5, what do you think?  Yet if I tell you that you can hire an outsourced developer for $10 / hour, somehow we set aside our dad’s advice that “nothing is free” and “you get what you pay for”.

Over the years, dozens of companies have asked me to join their company to help them “clean up their technology organization which they have unsuccessfully outsourced”.

Tell me if you’ve heard any of these before:

  • You should outsource that project because engineering resources overseas cost about $10 / hour.
  • Our engineering team is buried for the next 12 months, just outsource that project.
  • That project is a one-time project, don’t hire full-time resources, just outsource it.
  • We outsourced all our engineering 3 years ago, it isn’t really working and now we need someone to come help us clean up.
  • You can hire a few outsourced resources to augment your core engineering team.

So while I believe that you may know of someone who has successfully outsourced a project, there are some real challenges to getting it to work well.  I’ve learned quite a few lessons along the way – here are some of them.

As you know, communication is always a critical factor in any business – outsourced or not.  Communication in an outsourced relationship, especially offshore, is a very big challenge that is most often completely underestimated.  Not only are there usually language challenges, even if the outsourced team speaks English, but there are time challenges.  You should be prepared to manage workday offsets, sometimes by as much as 12 hours.  Outsourcing companies will tell you that their resources will use technology to help communicate efficiently – Skype and Hangouts work poorly in many global scenarios (delay, echo, drops, etc.).  Using wikis, Basecamp, Slack, ticketing systems and email just like you probably use with your core team is helpful, but is not a great substitute.  Outsourcing companies will also tell you that they will assign an onshore resource to manage the offshore team as a solution to communication and time-shifted work hours.  This approach helps, but again it is not great and at best it adds a lot of cost – the cost of the onshore resource itself, plus the cost of the inefficiency of having a relay system in place.

In addition to communication, there is the very large issue of resource stability over time.  For whatever reason – maybe because they pay their resources poorly – outsourcing companies don’t seem to retain resources for more than a month or two.  The cost of changing resources is very high, as usual: you will pay for the lost productivity of bringing in a new resource and getting that resource up to speed.  And if the onshore manager – the communication relay I mention above – happens to be the person who leaves your team, the impact is enormous.  Finally, keep in mind that if you reduce your demand for resources, the outsourcing company will of course move those resources elsewhere, and starting another phase or even supporting your existing product will become very, very expensive – so manage your demand wisely or the supply will go away.

In my opinion, there is a big difference between engineers and coders.  Coders simply implement – they write lines of instructions to tell a computer what to do.  Understanding a problem and designing an approach to solving it with computer instructions is a totally different animal.  Designing an end-to-end platform that involves many complicated interactions requires even more experience, education and skill – engineering is very, very different from coding.  I have yet to work with an outsourcing company that employs engineers, and never an architect.  The result that I have routinely seen is a giant pile of code, mostly undocumented and never, ever efficient.  By this I mean there is never any use of modern object-oriented constructs such as inheritance; if a routine is needed elsewhere, the code is copied and pasted there.  So when you have a bug, you probably have that bug in 10 places.  And that just describes simple procedural routines – I never, ever see good use of more advanced engineering concepts, for example multi-threading or distributed processing.  All of this leads to a maintenance nightmare.  Sometimes you can lower the impact by hiring a full-time employee as your architect, but even this is not foolproof.

The above are very serious landmines that are not easy to circumnavigate – you are not going to write an ironclad, fixed-bid contract that avoids all of this.  And while you may conclude that you should never use outsourced development, that is not really my point – you can use it, but it will be far more expensive than advertised and you will need to invest significantly in avoiding the issues that will turn your project into a big problem.


What this CTO does.

The role of a CTO is a topic that inspires many questions.  What is the difference between a CTO and a VP of Engineering?  What should you look for in a startup CTO?  There are literally hundreds of posts on the topic and deep comment threads explaining various experiences in various scenarios.  I think that is exactly the takeaway: the specific scenario matters.  The specifics being important is not unique to the CTO role – a good sales leader in one business doesn’t necessarily work in all other businesses.

My experience as a CTO has been in the area of building cloud-native, highly scalable platforms from the ground up.  The platforms have been built to process massive amounts of data, high transaction volumes and large numbers of concurrent users or device interactions.  When I say from the ground up, I mean that I run around with PowerPoint slides and wireframes describing the business before a line of code is written.  The first goal is always to sanity check the business idea.  The idea always changes, often in big ways (aka the infamous “pivot”) – so it is most likely a total waste of time and money to write code to aid in describing the business value.  I’ve raised at least $20M with PPT and zero code.  Yet I’m literally contacted twice a week by folks who want help or advice writing prototype code so they can raise money.  What?  You might be talking with the wrong folks if they need a prototype, or maybe you are not explaining your idea clearly.

Unfortunately, I don’t get to write code these days.  I’ve read all the comments saying that if the CTO doesn’t write code then go get someone else.  That advice is total baloney.  What is much more likely to be true is that an architect or lead developer is needed for that particular business and you should not be describing the role as CTO.

So what do I do as a CTO?  Thinking back over the companies where I’ve been a CTO, I spend a lot of time translating business ideas to a platform concept or, dare I say, vision.  I also spend time researching available technology to support the future platform architecture – I refuse to build things that already exist; I want to concentrate on building the new, non-existent stuff as quickly as possible.  I also spend time reverse-translating platform capabilities to business use cases – in other words, how the technology supports solving the important business problem – in the end, that is the entire point.

I’m deeply involved in contract negotiations, especially in relationships that involve critical technology partners.  There are many important aspects of technology partnerships that will make or break the value of your platform as an asset and, in turn, maybe even your entire business model.  For example, data rights and protection of intellectual property are critical.  Similarly, I spend a lot of time helping sales and business development understand customer use cases and map them to our platform, identifying creative ways to fill gaps, or recognizing a general pattern that needs to be built into our platform.

I’ve been part of businesses that have raised hundreds of millions of dollars and have sold several of these businesses.  I spend a lot of time in the due diligence process – explaining our platform inside and out, describing our discipline and investments, forecasting future ideas and in general supporting the idea that we have a technology asset that is truly worth something and supports our business idea.

I’m responsible for recognizing intellectual property and doing something about it.  Notice that I’m not just saying “filing a patent.”  That is because filing a patent is just one small thing you do to protect your IP – and quite honestly, probably not the most important.  I believe that if you have something that is true IP, you need to get it to market quickly and run with it as fast as possible, because that is what makes it really defensible, and it often is the key business differentiator that every business wants in their pitch.

I spend a good amount of time identifying and understanding emerging technology and trends.  I look for opportunities to use or integrate them in support of solving problems or filling gaps; I’m also interested in potential threats or disruptors.

Finally, just beyond the first couple of weeks and months, there are a ton of teamwork and operational investments that I need to make, and this grows with the size of the business.  Having bought and sold a few businesses, I understand how companies evaluate a technology-based business and what value is assigned upon acquisition, but also how companies forecast future investment requirements.  In other words, a great technology idea running in my basement with no documentation, run by cowboys, is far less valuable than a well-documented platform running in a fault-tolerant architecture, cared for by a top-notch team of professionals with verifiable workflows.  Building that isn’t free, but it is well worth paying those investment taxes along the way.

See, no code – sadly for me.  But then, I’m not the lead developer or architect or even the engineering manager.



My Bash Profile

In the past couple of weeks, several folks have asked me to share my bash profile.  I’m not sure why – maybe better bash profiles are high on everyone’s new year’s resolution lists. 🙂 This post should also give my daughter a good chuckle and reassure her that I’m still a geek.

In any case, these days I mostly work with Macs and Ubuntu AWS instances, and the bash profiles are a bit different between the two.  I’ll document my Mac profile; I’m sure you can tweak it for your OS.

Finally, these are the configurations that I have found personally useful in my work.  I’ve accumulated this profile from many folks that I’ve worked with and several online resources – sorry if I’m not properly giving full attribution; I assure you nearly none of these configurations came from my own thinking.

Enjoy.


# -------------------------------------------------------------------
# Description: This file holds all my BASH configurations and aliases
# Sections:
# 1. Include other sources
# 2. Environment Configuration
# 3. File and Folder Management
# 4. Searching
# 5. Process Management
# 6. Networking
# 7. System Operations & Information
# 8. Development
# --------------------------------------------------------------------

# --------------------------------
# 1. Include other sources
# --------------------------------

# Source any base profile
[[ -s "$HOME/.profile" ]] && source "$HOME/.profile" # Load the default .profile

# Source Bash base aliases
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi

# -------------------------------
# 2. ENVIRONMENT CONFIGURATION
# -------------------------------

# Change Prompt
# ------------------------------------------------------------
export PS1="______________________________\n| \w @ \h (\u) \n| => "
export PS2="| => "

# Set Default Editor
# ------------------------------------------------------------
export EDITOR=/usr/bin/vi
export SVN_EDITOR=vi

# Set default blocksize for ls, df, du
# from this: http://hints.macworld.com/comment.php?mode=view&cid=24491
# ------------------------------------------------------------
export BLOCKSIZE=1k

# IMPORTANT: GREP MODS/CHANGES to DEFAULTS
# ------------------------------------------------------------
export GREP_OPTIONS='-D skip --binary-files=without-match --ignore-case'

# Maven
# ------------------------------------------------------------

export M2_HOME=/usr/local/maven
export M2=$M2_HOME/bin

# MySQL
# ------------------------------------------------------------
export MYSQL_HOME=/usr/local/mysql

# ignore .svn in filename completion
# ------------------------------------------------------------
export FIGNORE=.svn

# JAVA
# ------------------------------------------------------------
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_71.jdk/Contents/Home

# EC2 Tools
# ------------------------------------------------------------
export EC2_HOME=/usr/local/ec2

# EC2 Chip's Environment
# ------------------------------------------------------------
export AWS_ACCESS_KEY="yourkeyhere"
export AWS_SECRET_KEY="yoursecrethere"
export EC2_CERT=~/.ssh/yourcerthere.pem
export EC2_PRIVATE_KEY=~/.ssh/yourkeyhere.pem

# My Scripts
# ------------------------------------------------------------
export SCRIPT_HOME=/usr/local/scripts

# SET VI MODE
# ------------------------------------------------------------
set -o vi

# Set Paths
# ------------------------------------------------------------
export PATH=".:$PATH:/usr/local/sbin:/usr/local/mysql/bin:$M2:$EC2_HOME/bin:$SCRIPT_HOME"

# Command configs
# -----------------------------
alias cp='cp -iv' # Preferred 'cp' implementation
alias mv='mv -iv' # Preferred 'mv' implementation
alias mkdir='mkdir -pv' # Preferred 'mkdir' implementation
alias ll='ls -FGlAhp' # Preferred 'ls' implementation
alias less='less -FSRXc' # Preferred 'less' implementation

# ls family
# ------------------------------------------------------------
alias ls='ls -la' # Standard
alias lx='ls -lXB' # Sort by extension.
alias lk='ls -lSr' # Sort by size, biggest last.
alias lt='ls -ltr' # Sort by date, most recent last.
alias lc='ls -ltcr' # Sort by/show change time,most recent last.
alias lu='ls -ltur' # Sort by/show access time,most recent last.

# cd family
# ------------------------------------------------------------
cd() { builtin cd "$@"; ll; } # Always list directory contents upon 'cd'
alias cd..='cd ../' # Go back 1 directory level (for fast typers)
alias ..='cd ../' # Go back 1 directory level
alias ...='cd ../../' # Go back 2 directory levels
alias .3='cd ../../../' # Go back 3 directory levels
alias .4='cd ../../../../' # Go back 4 directory levels
alias .5='cd ../../../../../' # Go back 5 directory levels
alias .6='cd ../../../../../../' # Go back 6 directory levels
alias ~="cd ~" # ~: Go Home

# misc
# ------------------------------------------------------------
alias f='open -a Finder ./' # f: Opens current directory in MacOS Finder
alias c='clear' # c: Clear terminal display
alias ducks='du -cks *|sort -rn|head -11' # ducks: List top ten largest files/directories in current directory
alias which='type -all' # which: Find executables
alias path='echo -e ${PATH//:/\\n}' # path: Echo all executable Paths
alias showOptions='shopt' # showOptions: display bash options settings
alias fixStty='stty sane' # fixStty: Restore terminal settings when screwed up
alias cic='bind "set completion-ignore-case on"' # cic: Make tab-completion case-insensitive
mcd () { mkdir -p "$1" && cd "$1"; } # mcd: Makes new Dir and jumps inside
trash () { command mv "$@" ~/.Trash ; } # trash: Moves a file to the MacOS trash
ql () { qlmanage -p "$*" >& /dev/null; } # ql: Opens any file in MacOS Quicklook Preview
alias DT='tee ~/Desktop/terminalOut.txt' # DT: Pipe content to file on MacOS Desktop

# lr: Full Recursive Directory Listing
# ------------------------------------------
alias lr='ls -R | grep ":$" | sed -e '\''s/:$//'\'' -e '\''s/[^-][^\/]*\//--/g'\'' -e '\''s/^/ /'\'' -e '\''s/-/|/'\'' | less'

# mans: Search manpage given in argument '1' for term given in argument '2' (case insensitive)
# displays paginated result with colored search terms and two lines surrounding each hit.
# Example: mans mplayer codec
# --------------------------------------------------------------------
mans () {
man $1 | grep -iC2 --color=always $2 | less
}

# showa: to remind yourself of an alias (given some part of it)
# ------------------------------------------------------------
showa () { /usr/bin/grep --color=always -i -a1 $@ ~/Library/init/bash/aliases.bash | grep -v '^\s*$' | less -FSRXc ; }

# -------------------------------
# 3. FILE AND FOLDER MANAGEMENT
# -------------------------------
zipf () { zip -r "$1".zip "$1" ; } # zipf: To create a ZIP archive of a folder
alias numFiles='echo $(ls -1 | wc -l)' # numFiles: Count of non-hidden files in current dir
alias make1mb='mkfile 1m ./1MB.dat' # make1mb: Creates a file of 1mb size (all zeros)
alias make5mb='mkfile 5m ./5MB.dat' # make5mb: Creates a file of 5mb size (all zeros)
alias make10mb='mkfile 10m ./10MB.dat' # make10mb: Creates a file of 10mb size (all zeros)

# cdf: 'Cd's to frontmost window of MacOS Finder
# ------------------------------------------------------
cdf () {
currFolderPath=$( /usr/bin/osascript <<EOT
tell application "Finder"
try
set currFolder to (folder of the front window as alias)
on error
set currFolder to (path to desktop folder as alias)
end try
POSIX path of currFolder
end tell
EOT
)
echo "cd to \"$currFolderPath\""
cd "$currFolderPath"
}

# extract: Extract most known archives with one command
# ---------------------------------------------------------
extract () {
if [ -f $1 ] ; then
case $1 in
*.tar.bz2) tar xjf $1 ;;
*.tar.gz) tar xzf $1 ;;
*.bz2) bunzip2 $1 ;;
*.rar) unrar e $1 ;;
*.gz) gunzip $1 ;;
*.tar) tar xf $1 ;;
*.tbz2) tar xjf $1 ;;
*.tgz) tar xzf $1 ;;
*.zip) unzip $1 ;;
*.Z) uncompress $1 ;;
*.7z) 7z x $1 ;;
*) echo "'$1' cannot be extracted via extract()" ;;
esac
else
echo "'$1' is not a valid file"
fi
}

# ---------------------------
# 4. SEARCHING
# ---------------------------

alias qfind="find . -name " # qfind: Quickly search for file
ff () { /usr/bin/find . -name "$@" ; } # ff: Find file under the current directory
ffs () { /usr/bin/find . -name "$@"'*' ; } # ffs: Find file whose name starts with a given string
ffe () { /usr/bin/find . -name '*'"$@" ; } # ffe: Find file whose name ends with a given string
ft () { /usr/bin/find . -name "$2" -exec grep -il "$1" {} \; ; } # ft: Find text in any file

# spotlight: Search for a file using MacOS Spotlight's metadata
# -----------------------------------------------------------
spotlight () { mdfind "kMDItemDisplayName == '$@'wc"; }

# ---------------------------
# 5. PROCESS MANAGEMENT
# ---------------------------

# findPid: find out the pid of a specified process
# -----------------------------------------------------
# Note that the command name can be specified via a regex
# E.g. findPid '/d$/' finds pids of all processes with names ending in 'd'
# Without the 'sudo' it will only find processes of the current user
# -----------------------------------------------------
findPid () { lsof -t -c "$@" ; }

# memHogsTop, memHogsPs: Find memory hogs
# -----------------------------------------------------
alias memHogsTop='top -l 1 -o rsize | head -20'
alias memHogsPs='ps wwaxm -o pid,stat,vsize,rss,time,command | head -10'

# cpuHogs: Find CPU hogs
# -----------------------------------------------------
alias cpuHogs='ps wwaxr -o pid,stat,%cpu,time,command | head -10'

# topForever: Continual 'top' listing (every 10 seconds)
# -----------------------------------------------------
alias topForever='top -l 9999999 -s 10 -o cpu'

# ttop: Recommended 'top' invocation to minimize resources
# ------------------------------------------------------------
# Taken from this macosxhints article
# http://www.macosxhints.com/article.php?story=20060816123853639
# ------------------------------------------------------------
alias ttop="top -R -F -s 10 -o rsize"

# myPs: List processes owned by my user:
# ------------------------------------------------------------
myPs() { ps $@ -u $USER -o pid,%cpu,%mem,start,time,bsdtime,command ; }

# tm: Search for a process
# ------------------------------------------------------------
alias tm="ps -ef | grep"

# ---------------------------
# 6. NETWORKING
# ---------------------------

alias myIP='curl ip.appspot.com' # myIP: Public facing IP Address
alias netCons='lsof -i' # netCons: Show all open TCP/IP sockets
alias flushDNS='dscacheutil -flushcache' # flushDNS: Flush out the DNS Cache
alias lsock='sudo /usr/sbin/lsof -i -P' # lsock: Display open sockets
alias lsockU='sudo /usr/sbin/lsof -nP | grep UDP' # lsockU: Display only open UDP sockets
alias lsockT='sudo /usr/sbin/lsof -nP | grep TCP' # lsockT: Display only open TCP sockets
alias ipInfo0='ipconfig getpacket en0' # ipInfo0: Get info on connections for en0
alias ipInfo1='ipconfig getpacket en1' # ipInfo1: Get info on connections for en1
alias openPorts='sudo lsof -i | grep LISTEN' # openPorts: All listening connections
alias showBlocked='sudo ipfw list' # showBlocked: All ipfw rules inc/ blocked IPs

# ii: display useful host related information
# -------------------------------------------------------------------
RED='\033[0;31m' # red highlight used by ii() below
NC='\033[0m' # no color (reset)
ii() {
echo -e "\nYou are logged on ${RED}$HOST$NC"
echo -e "\nAdditional information:$NC " ; uname -a
echo -e "\n${RED}Users logged on:$NC " ; w -h
echo -e "\n${RED}Current date :$NC " ; date
echo -e "\n${RED}Machine stats :$NC " ; uptime
echo -e "\n${RED}Current network location :$NC " ; scselect
echo -e "\n${RED}Public facing IP Address :$NC " ; myIP
#echo -e "\n${RED}DNS Configuration:$NC " ; scutil --dns
echo
}

# ---------------------------------------
# 7. SYSTEMS OPERATIONS & INFORMATION
# ---------------------------------------

# cleanupDS: Recursively delete .DS_Store files
# -------------------------------------------------------------------
alias cleanupDS="find . -type f -name '*.DS_Store' -ls -delete"

# finderShowHidden: Show hidden files in Finder
# finderHideHidden: Hide hidden files in Finder
# -------------------------------------------------------------------
alias finderShowHidden='defaults write com.apple.finder ShowAllFiles TRUE'
alias finderHideHidden='defaults write com.apple.finder ShowAllFiles FALSE'

# cleanupLS: Clean up LaunchServices to remove duplicates in the "Open With" menu
# -----------------------------------------------------------------------------------
alias cleanupLS="/System/Library/Frameworks/CoreServices.framework/Frameworks/LaunchServices.framework/Support/lsregister -kill -r -domain local -domain system -domain user && killall Finder"

# screensaverDesktop: Run a screensaver on the Desktop
# -----------------------------------------------------------------------------------
alias screensaverDesktop='/System/Library/Frameworks/ScreenSaver.framework/Resources/ScreenSaverEngine.app/Contents/MacOS/ScreenSaverEngine -background'

# freq: Which commands do you use the most
# -----------------------------------------
alias freq='cut -f1 -d" " ~/.bash_history | sort | uniq -c | sort -nr | head -n 30'

# CMDFU lookup
cmdfu(){ curl "http://www.commandlinefu.com/commands/matching/$@/$(echo -n $@ | openssl base64)/plaintext"; }

# easily scp a file back to the host you are connecting from and place on Desktop
mecp () { scp "$@" ${SSH_CLIENT%% *}:Desktop/; }

# ---------------------------------------
# 8. DEVELOPMENT
# ---------------------------------------
alias aEdit='sudo edit /etc/apache2/httpd.conf' # aEdit: Edit httpd.conf
alias aLogs="less +F /var/log/apache2/error.log" # aLogs: Shows apache errorlogs
alias aRestart='sudo apachectl graceful' # aRestart: Restart Apache
alias aTail='tail -n 1000 -f /var/log/apache2/error.log /var/log/apache2/access.log' # aTail: Tails HTTP error logs

alias hEdit='sudo edit /etc/hosts' # hEdit: Edit /etc/hosts file
httpHeaders () { /usr/bin/curl -I -L $@ ; } # httpHeaders: Grabs headers from web page

# httpDebug: Download a web page and show info on what took time
# -------------------------------------------------------------------
httpDebug () { /usr/bin/curl $@ -o /dev/null -w "dns: %{time_namelookup} connect: %{time_connect} pretransfer: %{time_pretransfer} starttransfer: %{time_starttransfer} total: %{time_total}\n" ; }

# AWS EC2 Functions
# ---------------------------------------
# Instance id info: pull the 2nd field which is instance id from ec2 info
ec2id() { ec2-describe-instances -H --filter tag:Name="$1"|grep -i instance | awk '/INSTANCE/{print $2}'; }

# grabs the instance id from a name lookup and passes it to find associated elastic ips
#ec2elip() { ec2-describe-instances -H --filter tag:Name="$1"|grep -i instance | awk '/INSTANCE/{print $2}'|ec2-describe-addresses --filter=instance-id= -; }

ec2elip() { local awsid=`ec2-describe-instances -H --filter tag:Name="$1"|grep -i instance | awk '/INSTANCE/{print $2}'`; echo $awsid; ec2-describe-addresses -F instance-id="$awsid"; }
ec2info() { ec2-describe-instances -H --filter tag:Name="$1"; };

# EC2 VOLUME COMMANDS
# awk or cut -f 3 would work
ec2volinfo() { ec2-describe-instances -H --filter tag:Name="$1"|grep -i vol | awk '/BLOCKDEVICE/{print $3}' | ec2-describe-volumes -;}
ec2volsnap() { ec2-create-snapshot "$1" "$2"; } # use ec2volinfo first to get vol id

# IP Address info: pull the 4th field which is ip address from ec2 info
ec2ip() { ec2-describe-instances -H --filter tag:Name="$1" |grep -i instance | awk '/INSTANCE/{print $4}'; }


Is a Personal Media Network Possible?

Is it possible to build a personal media network that allows you to collect, organize and stream video, pictures and music to all your devices – TV, phone, computer, game console?

There seem to be a few key components:

  • A media server – organizes your content wherever you keep it, transcodes content appropriately for various devices and manages streaming tasks.
  • Content collection – content exists in many places: your personal library, streaming services (Netflix, Amazon, Hulu, YouTube, Pandora, etc.), NZB newsfeeds, torrent sites and sprinkled throughout many other content sites (video news stories, blogs, sports sites, etc.).  While some of these services provide built-in interfaces for searching and accessing, it would be nice to make use of some type of internet PVR technology to manage your interests and have your media network automatically collect content for you so that you can make use of it when you are ready.
  • Download agents – in some cases, you’ll need to download content rather than stream the media directly from a source.  Of course, downloading also means storing the content some place – hard drives, NAS devices and cloud storage are all options with varying capacity, technical features and costs.
  • Utility – I don’t think there is an all-in-one solution, so there are a few surrounding utilities, scripts and sites that might make a personal media network more automated and easier to use.

Some other features might be valuable as well – it would be great if the media network could:

  • Organize the stored media files – folders, file naming, etc.
  • Collect metadata – artwork, descriptions, plot summaries, etc.
  • Present one simple interface for searching, recently added, view by artist, genre, year, etc.
  • Share with family and friends, inside your home and externally
  • Make your media reachable even when you’re offline – syncing to mobile devices.
  • Transcode for your viewing device’s capabilities – screen size, format, etc.
  • Support saving it now, watching it later
  • Manage media that you are interested in – watchlist, TV series, wanted movies, etc.
  • Automatically acquire or access media when available, including managing your media preferences – high definition (1080/720p), AC3, 320K bitrates, etc – and notify you when updates are available.

It might be possible to install and configure some free, open source products such as Plex Media Server, CouchPotato, SickBeard, HeadPhones, Transmission and SABnzbd to build a personal media network.


New growth or less risk

Recently I was catching up with a buddy I worked with about 10 years ago; we were comparing notes on things that are very different now compared to our experience during the late 90’s.  We quickly got to cloud computing and what a huge advantage it is not to have to burn tons of capital on gear to cover peak traffic.  We both rattled off a dozen other things that make it incredibly simpler to start a business, and how many wildly successful, very small, efficient technology businesses are emerging.  As usual, I like to throw Assembla into the mix at every opportunity because I think they’ve got something unique and valuable to add to the startup formula – basically, they’ve formulated a software-as-a-service model for development environments.

Then my friend said something that was different and got me thinking.  He said, “The barriers to starting a new business have been greatly reduced across the board in the last decade.  In fact, I’m starting to see some new problems.  Think of it like a mature rain forest that has thrown off a ton of new seeds.  At the forest floor, young seedlings all look the same and struggle to get sunlight.  I’m starting to see more Venture Capitalists move down the timeline and wait for opportunities with more mature entrepreneurs.”  I wonder if the data backs up the theory that venture capital is transitioning to more mature companies now.  And if so, is it because the cost of entry is so low that the market is flooded?  Is it because folks are more risk-averse in a tight economy?


Springpad

I don’t have a very good memory – and like most people with a family of five, I have many things going on.  Some routines are repetitive – medical appointments, school sports, pet-related chores, grocery shopping and home maintenance.  Some tasks are about personal health, well-being and organization – exercise, diet and Getting Things Done (GTD).  We also have occasional big family events that take a massive amount of work, months of it, to coordinate – a Bat Mitzvah, a graduation party and college application/selection, to name a few.  Coordinating with everyone in my family is a big challenge – there are many moving pieces, late-breaking changes and necessary communication.

Fortunately, my entire family is very technology oriented – everyone has a cell phone, email account, online calendar and our own social network; we utilize many of the typical online services such as shopping sites, travel portals, media sharing and financial management tools.  Things quickly get complicated when we spread these services across five people.  While we’re able to gain some efficiency within some of these individual services, many of them lack coordination across our social networks and they fail to roll up our real life events into any type of comprehensive, manageable container.  Over the years, we’ve tried many tools and techniques – paper, Covey organizers, refrigerator calendars, spreadsheets, Microsoft Outlook, Google Docs, Evernote and Cozi; all come with some number of shortcomings, silo issues and a lack of integrated, actionable data.  Stress builds and my brain hurts.

These problems are exactly what Spring Partners set out to address with Springpad.  My entire family shares an account that we use to remember stuff, integrate actionable data across other online services and leverage our trusted social connections as we manage our real life events.  We’re able to aggregate “My Stuff” in meaningful combinations and coordinate calendars and communications across multiple channels – TXT, email and mobile interfaces to Springpad itself.  There is a light social network integrated within Springpad – the usual “follow” other Springpad users – and it includes the typical “share” your stuff with your Facebook friends and Twitter followers.  Springpad also attempts to solve the empty notebook problem by offering many pre-built, pre-organized applications around common life events like getting organized, meal planning, maintaining a home, parenting, traveling, celebrating, exercising, learning and working.

I’ll use a simple example.  My plan in life was to simply enjoy wine, not to ever invest in learning the elaborate details and subtle nuances that yield good wine and, more importantly, wine that I like.  Yes, I planned to simply follow others in this life pleasure – ride the coattails, lean on my friends’ investment in wine knowledge.  In general, this plan was working well.  But I still had the memory gap problem: how do I remember the great wine at last Friday night’s dinner party?  Back in the day, there was only good old paper and pen – not often in my pocket at a party and a frequent washing machine victim.  More recently, when I’m able to sneak the smartphone past my wife’s what-not-to-wear review, I’m able to note the wine electronically.  It is still a pain to type the name, vineyard and year – but if the wine is good, this is my only shot at remembering it later.  So this was a perfectly sufficient solution – but I realized that I really like almost everything that certain friends like.  Unfortunately, I’m not always with my friends when they’re drinking wine, so I’m not there to note the ever-expanding collection.  I’m off the coattail – FAIL.  How do I keep up with this important endeavor?

One of Springpad’s nicest applications is a Wine Notebook sponsored by Gary Vaynerchuk.  Using Springpad’s Wine Notebook, I am able to collect, organize, share with followers, include in my own plans and act on the information.  I am also able to follow my trusted friends, who share their favorite wines.  I am even able to see what Gary is recommending.  I am able to “spring” a wine into my collection and use it in many ways.  I can reference that info on my phone while at the store, and I can use it later in my party planning notebook where I keep track of my shopping list and things to do.  Even better, my friends can see what wines I like and bring one along when they come to my house for dinner.  I am able to add comments to my wine data, categorize it, note vineyard and pricing information or even attach a video presentation by Gary.

The power doesn’t stop there.  While it is great that Springpad lets me continue riding wine enthusiasts’ coattails, I think the real power is that I’m now able to reuse “My Stuff” in managing my other life events.  In planning my next Napa visit, I can reuse all this wine data to organize my vineyard tour, using a travel planner notebook, and work diligently to verify everyone’s comments on nose and palate.  I can search and include information on my travel schedule, car rental, hotel stay, restaurant reservations and local-area friend meetups.  I can keep track of details on my trip, include pictures, videos and comments, and share with my followers so that they can plan a similar time later.

Finally, less stress and fewer brain cramps.
