Saturday, November 22, 2014

Haskell class, so far

Well, I'm about 5 weeks into Introduction to Functional Programming, a.k.a. FP101x, an online class taught in Haskell by Erik Meijer. The class itself is a couple weeks ahead of that; I'm lagging a bit. So, how is it so far, you ask?

The first 4 weeks covered basic functional concepts and how to express them in Haskell, closely following chapters 1-7 of the book, Graham Hutton's Programming in Haskell:

  • Defining and applying functions
  • Haskell's type system
    • parametric types
    • type classes
    • type signatures of curried functions
  • pattern matching
  • list comprehensions
  • recursion
  • higher-order functions

Haskell's hierarchy of type classes is elegant, but some obvious things seem to be missing. For example, you can't show a function, though it would be really helpful to show something like a docstring, or at least the function's type signature. Also, machine-word-sized Ints don't automatically promote, so if n is an Int, n/5 produces a type error.
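As a small sketch of the workaround (the function name `average` is my own invention, not from the class): since (/) is defined only for Fractional types, an Int has to be converted explicitly with fromIntegral:

```haskell
-- Ints never promote silently; (/) requires a Fractional type,
-- so convert with fromIntegral before dividing.
average :: [Int] -> Double
average xs = fromIntegral (sum xs) / fromIntegral (length xs)
```

Here `average [1, 2, 3, 4]` gives 2.5, while writing `sum xs / length xs` directly is rejected with a "No instance for (Fractional Int)" error.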

Most of the concepts were already familiar from other functional languages: Scheme via SICP, OCaml via Dan Grossman's programming languages class, and Clojure via The Joy of Clojure. So, this early part was mostly a matter of learning Haskell's syntax.

Some nifty examples

  • a recursive definition of factorial:

    factorial :: Integer -> Integer
    factorial 0 = 1
    factorial n = n * (factorial (n-1))
  • sum of the first 8 powers of 2:

    sum (map (2^) [0..7])
  • a recursive definition of map:

    map :: (a -> b) -> [a] -> [b]
    map f [] = []
    map f (x:xs) = f x : map f xs
  • get all adjacent pairs of elements from a list:

    pairs :: [a] -> [(a,a)]
    pairs xs = zip xs (tail xs)
  • check if a list of elements that can be ordered is sorted by confirming that each pair of elements is ordered:

    sorted :: Ord a => [a] -> Bool
    sorted xs = and [x <= y | (x, y) <- pairs xs]

Haskell attains its sparse beauty by leaving a lot implied. One thing I figured out during my brief time with OCaml also seems to apply to Haskell. Although these languages lack the forest of parentheses you'll encounter in Lispy languages, it's not that the parentheses aren't there; you just can't see them. A key to reading Haskell is understanding the rules of precedence, associativity and fixity that imply the missing parentheses.

The standard Prelude operators, by precedence and associativity:

    Precedence  Left associative                   Non-associative                          Right associative
    9           !!                                                                          .
    8                                                                                       ^, ^^, **
    7           *, /, `div`, `mod`, `rem`, `quot`
    6           +, -
    5                                                                                       :, ++
    4                                              ==, /=, <, <=, >, >=, `elem`, `notElem`
    3                                                                                       &&
    2                                                                                       ||
    1           >>, >>=
    0                                                                                       $, $!, `seq`
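To see the implied parentheses at work, here's a tiny sketch (the function names are mine): function application binds tighter than any infix operator, and * (precedence 7) binds tighter than + (precedence 6), so the two definitions below are identical:

```haskell
-- succ x * 2 + 1 parses as ((succ x) * 2) + 1:
-- application first, then * at precedence 7, then + at precedence 6.
implicit :: Int -> Int
implicit x = succ x * 2 + 1

explicit :: Int -> Int
explicit x = ((succ x) * 2) + 1
```

For x = 3, both give 9: succ 3 is 4, times 2 is 8, plus 1 is 9.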

Another key is reading type signatures of curried functions, as currying is the default in Haskell and is relied upon extensively in composing functions, particularly in the extra terse "point-free" style.
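A quick sketch of what that means (names invented for illustration): the arrow in a type signature associates to the right, so Int -> Int -> Int is really Int -> (Int -> Int), and supplying fewer arguments than the signature lists simply returns another function:

```haskell
-- add's type Int -> Int -> Int parenthesizes as Int -> (Int -> Int).
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: add with its first argument fixed is itself a function.
increment :: Int -> Int
increment = add 1

-- Point-free style: the argument never appears on the left-hand side.
incrementAll :: [Int] -> [Int]
incrementAll = map (add 1)
```

So `increment 41` is 42, and `incrementAll [1, 2, 3]` is [2, 3, 4].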

Currently, I'm trying to choke down Graham Hutton's Addendum on Monads. If I end up understanding that, it'll get me a code-monkey merit badge, for sure.

Tuesday, November 11, 2014

The DREAM / RECOMB Conference 2014

The RECOMB/ISCB Conference on Regulatory and Systems Genomics, with DREAM Challenges and Cytoscape Workshops is running this week in San Diego.

A bunch of us from Sage Bionetworks are here to connect with the DREAM community. In introductory remarks, Stephen Friend framed the challenges as piloting new modes of collaboration and engagement addressing multidimensional problems based on the idea that open innovation will trump closed silos.

Lincoln Stein: The Future of Genomic Databases

I first heard Lincoln Stein speak at an O'Reilly conference in 2002, on building a bioinformatics nation. The same themes of openness and integration reappeared in Stein's talk on The Future of Genomic Databases.

Stein asks, "Open Data + open source = reproducible science?" Not exactly. Stein presents some emerging solutions to the remaining obstacles: big data sets, complex workflows, unportable code and data access restrictions.

Cloud computing, specifically colocation of data and compute, enables handling big data. Containers (e.g., Docker) address the problem of code portability. The Global Alliance is working towards providing APIs both to encapsulate technical complexity and to provide a control point at which to enforce restrictions.

In case we're wondering what to do with all the machine cycles made available by Amazon and Google, bioinformatics workflows are growing in complexity. Workflow managers like Seqware and Galaxy provide a formalized description of multistep processes and manage tools and their dependencies.

Legal restrictions hinder data integration, even though donors want their samples to contribute to research. Licensure for data access, combined with uniform consent, could reduce friction and streamline the data access process. On the technical side, solutions involve homomorphic encryption and agent-based federated queries.

As a parting thought, Stein notes that digital infrastructure enables experiments in incentive structures and economic models, citing micropayments, ratings, and challenges.

Andrea Califano

Andrea Califano spoke on the genotype to phenotype linkage in cancer. Thinking of the cell as an integrator of signals, Califano's group traces from gene or protein expression signatures of cell states (normal, neoplastic, metastatic) back through the network to the master regulators responsible for that signature. One related paper is Identification of Causal Genetic Drivers of Human Disease through Systems-Level Analysis of Regulatory Networks.

Paul Boutros Somatic Mutation Calling Challenge

Paul Boutros presented the Somatic Mutation Calling Challenge (SMC-DNA). He announced the intention for the SMC challenges to become a living benchmark, an objective standard against which future methods will be tested.

Paul also crowned the winners of the synthetic tumor phase of SMC-DNA: the Broad Institute's MuTect for single-nucleotide variants and novoBreak, from Ken Chen's lab at MD Anderson, for structural variants. The plan is to announce winners on real tumor data in February, after experimental validation.

The Winners

The SMC challenge is unusual for DREAM in its level of specialization. In the other challenges, a couple of methods were highlighted: Gaussian process regression and dictionary learning for sparse representation.

But, increasingly, the main differentiator is application of biological domain knowledge, especially with respect to selecting and processing features. Li Liu of Arizona State's Biodesign Institute, for example, won part of the Acute Myeloid Leukemia challenge by weighting proteins based on their evolutionary conservation.

Another theme is that genetic features seem to carry weak signal compared to more downstream features such as gene expression or clinical variables. Peddinti Gopalacharyulu, a top performer in the Gene Essentiality Challenge, commented that perhaps the way to use genetics is to extract the component of gene expression that is not explained by genetic features.


Two of the DREAM 9.5 challenges are follow-ups to the Somatic Mutation Calling challenge from the 8.5 round. The SMC empire expands into RNA and tumor heterogeneity. In the olfaction challenge, the goal is to predict, from molecular features, odor as described by human subjects. The prostate cancer challenge asks participants to classify patients according to survival using data sourced from the comparator arms of clinical trials.

For the DREAM 10 round, there's an imaging challenge in the works and a sequel to the ALS challenge from DREAM 7.


That's just the DREAM part of the meeting, or, really, the subset that fit into my brain. As an added bonus, there were several representatives from Cytoscape-related projects and some conversation about the Global Alliance for Genomics and Health.

Tuesday, October 14, 2014

Let's learn us a Haskell

Let's say you've been meaning to learn Haskell for a long time, secretly yearning for purely functional programming, laziness and a type system based on more category theory than you can shake a functor at.

Now's your chance. Erik Meijer is teaching an online class Introduction to Functional Programming on edX, about which he says, "This course will use Haskell as the medium for understanding the basic principles of functional programming."

It starts today, but I've gotten a head start by working through the first few chapters of Learn You a Haskell for Great Good! which most agree is the best place to get started with Haskell.

Anyone up for a Seattle study group?



Other resources

It must be some pack-rat instinct that makes me compile these lists.

Thursday, July 03, 2014

Galaxy Community Conference

The 2014 Galaxy Community Conference (GCC2014) wrapped up yesterday at Johns Hopkins University in Baltimore. The best part for me was the pre-conference hackathon.

If you're not familiar with Galaxy, it's a framework for wrapping command line bioinformatics tools in a web UI. On top of that, Galaxy adds lots of sophistication around job scheduling, configurable to run its jobs on SGE, SLURM and other queuing systems, and to run on virtual clusters with CloudMan. Galaxy users can design reproducible workflows and manage tool versioning and dependencies through the Toolshed.

The project has attracted a vibrant community supported by an active Q&A site and an IRC channel. Vendors offer Galaxy appliances, cloud deployments and consulting.


For the hackathon, 40 developers gathered in James Taylor's new computing lab in one of the red brick buildings interspersed with lawns and groves of trees that make up the Hopkins campus. This was a great setting for an accelerated ramp-up on the Galaxy codebase. Out of 18 proposed projects, 9 got far enough to be called finished. I learned how to write tool wrappers and how to access the Galaxy API.

Photo credit - someone on the Galaxy team

Galaxy + Synapse

Specifically, I was there to put together some integration between Synapse and Galaxy, building on what Kyle Ellrott had already started. Reproducible analysis, provenance and annotation are core concerns of both projects, so it seems like a good fit. After data exchange, which Kyle got working, exchanging provenance and metadata sounds like a logical next step.

Galaxy histories should be able to refer to Synapse entities and receive annotations along with data objects.

Serializing Galaxy histories, workflows, etc. out to Synapse might allow a usage model where cloud instances of Galaxy can be spun up and shut down on demand, with all the important details preserved in Synapse in between.

Portable provenance might be a longer term goal. It would be neat to be able to thread provenance through some arbitrary combination of data sources and analysis platforms. All of these provenance-aware platforms - Figshare, Dryad, Synapse, Arvados, Galaxy, etc - ought to be able to share provenance between themselves.

Next year

The whole event was well put together in general. Next year's Galaxy Community Conference is scheduled for 6-8 July 2015 in Norwich, England.

Tuesday, June 17, 2014

Ada's Technical Books

If you're in Seattle, you owe it to yourself to spend some time in Ada's Technical Books in my old neighborhood. It's on the top of Capitol Hill on 15th between Republican and Harrison.

The shop features books on all sorts of techy topics - science, math, engineering and, of course, computers along with various nifty maker-type gadgets and a cafe full of tasty treats. Geek heaven!

Thursday, May 08, 2014

How to grep git commits

From the There's-got-to-be-a-better-way Department comes this tale of subterranean Git spelunking...

It all started because I lost some code. I had done my duty like a good little coder. I found a bug, wrote a test that reproduced it and filed an issue noting the name of the failing test. Then, some travel and various fire drills intervened. Memory faded. Finally, getting back to my routine, I had this bug on my docket. Already had the repro; should be a simple fix, right?

But where was my test? I asked Sublime to search for it, carefully turning off the annoying find-in-selection feature. Nothing. Maybe I stashed it. Do git stash list. Nothing that was obviously my test. Note to self: Use git stash save <message> instead of just git stash. I did git show on several old stashes, to no avail. How was I going to find this thing?

A sensible person would have stopped here and rewritten the test. But, how much work are we willing to do to avoid doing any work? Lots, apparently.

So, I wrote this ridiculous bit of hackery instead. I call it grep_git_objects:

import os
import subprocess
import sys

if len(sys.argv) < 2:
    print("Usage: python grep_git_objects.py <target>")
    sys.exit(1)

target = sys.argv[1].encode()

# Loose objects live in .git/objects/xx/..., where xx is the first
# byte of the object's hash in hex.
for i in range(256):
    dirname = ".git/objects/%02x" % i
    if not os.path.isdir(dirname):
        continue
    for filename in os.listdir(dirname):
        h = "%02x%s" % (i, filename)
        # Pretty-print the object and search its contents.
        output = subprocess.check_output(["git", "cat-file", "-p", h])
        if target in output:
            print("found in object:", h)
            print(output.decode(errors="replace"))


Run it in the root of a repo. It searches through the .git/objects hierarchy to find your misplaced tests or anything else you've lost. (It scans loose objects only; anything already compressed into a packfile won't be found.)

You'd think git grep would do exactly that. But git grep searches tracked content - the working tree or a given revision - not dangling objects like a dropped stash. In any case, git grep didn't find my missing test. The above bit of Python code did.

Once I found it, I had the hash code for a Git object. Blobs are the leaves of Git trees. So, I grepped again for the hash of the blob and found the tree node it belonged to, working my way up a couple more levels to the commit.

With commit in hand, I could see by the "WIP on develop" message that it was indeed a stash, which I must have dropped by mistake. Anyway, that was fun... sorta.

Tuesday, April 22, 2014

The damaging effects of hypercompetition

In Rescuing US biomedical research from its systemic flaws (PNAS 2014), authors Bruce Alberts, Marc Kirschner, Shirley Tilghman and Harold Varmus recount the ways in which research funding is broken, in particular for life sciences.

PIs pour vast amounts of time and energy into competing for a shrinking pool of funds. Pressure for results causes a shift to short-term thinking and engenders a conservatism poorly suited to producing breakthroughs. In much the same way as the corporate sector abandoned fundamental research in favor of product development, the current funding climate overvalues "translational" research over basic science. The situation is especially dire for young scientists facing the prospect of years as an underpaid and overworked post-doc with faint hopes of ever landing a faculty position. The internet is littered with goodbye-academia letters. [1, 2, 3, 4, 5, 6]

The Alberts paper proposes a few solutions, some sensible and one in particular that doesn't make much sense to me.

Broadening career paths for scientists is a sound idea. In this respect, life science can borrow from information technology. Computer science departments and the tech industry have a long history of exchange. The barriers in biotech are higher, but the flow of people and ideas between academia and industry can only be a good thing. Probably the biggest factor in diversifying away from the shrinking tenure track is matching expectations with reality.

Managing expectations seems more sensible than trying to match supply and demand by restricting entry into PhD programs on the reasoning that the system is training too many PhDs. Isn't too many PhDs a good problem to have? Is educating people a bad thing? Scientifically trained people are a tremendous asset. If we're not using a valuable resource effectively, having less of that resource doesn't seem like an ideal solution. With some creative thinking, productive uses for that talent can be and are being found.

I'm not convinced that the pyramid shaped structure of science is the problem. Harnessing the idealism, curiosity and naive overconfidence of youth works. What doesn't work in absence of perpetual growth is the expectation that everyone at the bottom of the pyramid will eventually be at the top. But, what's wrong with a system in which people are supported for a time on research grants then continue onto many diverse career paths?

Getting the mix of competition and camaraderie right can be the difference between a nurturing situation and a toxic one. Economics teaches that incentives matter. But, re-engineering science will require a nuanced understanding of what those incentives are, both in crass terms of money and prestige but also the more subtle psychology around freedom, inner drive and higher purpose.

Since nobody asked me

If I were in charge of funding science, I'd place the majority of my bets on longer-term grants for small, focused labs where the PI does hands-on science and training of new scientists. Grants of 5 years or so would give researchers time to do actual science in teams of 2-8 people. The article recognizes the success of the Howard Hughes Medical Institute model of explicitly encouraging scientists to pursue risky ideas - "the free play of free intellects".

I'd be reluctant to fund 40-person labs because neither mentoring nor creative thinking scales up to that size particularly well. I'd also avoid large multi-center grants. Funding agencies seem to favor them, perhaps as a safer bet, but they're a recipe for infighting and politicking, risking all the disadvantages of both collaboration and competition with few of the benefits.

Research has a great track record of paying off in the long run. But, Bill Janeway, author of Doing Capitalism in the Innovation Economy, says that there has to be a surplus in the funding system to absorb the unavoidable costs of uncertainty, both in terms of when the payoff comes and who captures the gains. In the past, this surplus has come from monopolistic corporations like Bell or Xerox, from the taxpayer, or from flush venture capital. In The Great Stagnation, Tyler Cowen posits that we've picked the low-hanging fruit of the current wave of technology - that we're in a period of stagnation while the last wave is fully digested and the stage is set for the next wave. To that end, maybe some hedge-fund wizard can cook up a financial innovation for reliably financing risky, long-term projects with unevenly distributed payouts.

[1] Goodbye academia, I get a life
[2] Goodbye Academia
[3] Goodbye academia? Hello, academia
[4] The Big Data Brain Drain
[5] Why So Many Academics Quit and Tell
[6] On Leaving Academe
[7] Science: The Endless Frontier, Vannevar Bush, 1945
[8] The Decline of Unfettered Research, Andrew Odlyzko, 1995