Tuesday, April 22, 2014

The damaging effects of hypercompetition

In Rescuing US biomedical research from its systemic flaws (PNAS 2014), authors Bruce Alberts, Marc Kirschner, Shirley Tilghman and Harold Varmus recount the ways in which research funding is broken, in particular for the life sciences.

PIs pour vast amounts of time and energy into competing for a shrinking pool of funds. Pressure for results causes a shift to short-term thinking and engenders a conservatism poorly suited to producing breakthroughs. In much the same way that the corporate sector abandoned fundamental research in favor of product development, the current funding climate values "translational" research over basic science. The situation is especially dire for young scientists facing the prospect of years as underpaid and overworked post-docs with faint hopes of ever landing a faculty position. The internet is littered with goodbye-academia letters. [1, 2, 3, 4, 5, 6]

The Alberts paper proposes a few solutions, some sensible and one in particular that doesn't make much sense to me.

Broadening career paths for scientists is a sound idea. In this respect, the life sciences can borrow from information technology, where computer science departments and the tech industry have a long history of exchange. The barriers in biotech are higher, but the flow of people and ideas between academia and industry can only be a good thing. Probably the biggest factor in diversifying away from the shrinking tenure track is matching expectations with reality.

Managing expectations seems more sensible than trying to match supply and demand by restricting entry into PhD programs on the reasoning that the system is training too many PhDs. Isn't too many PhDs a good problem to have? Is educating people a bad thing? Scientifically trained people are a tremendous asset. If we're not using a valuable resource effectively, having less of that resource doesn't seem like an ideal solution. With some creative thinking, productive uses for that talent can be, and are being, found.

I'm not convinced that the pyramid-shaped structure of science is the problem. Harnessing the idealism, curiosity and naive overconfidence of youth works. What doesn't work, in the absence of perpetual growth, is the expectation that everyone at the bottom of the pyramid will eventually be at the top. But what's wrong with a system in which people are supported for a time on research grants and then continue on to many diverse career paths?

Getting the mix of competition and camaraderie right can be the difference between an environment that's nurturing and one that's toxic. Economics teaches that incentives matter. But re-engineering science will require a nuanced understanding of what those incentives are, in the crass terms of money and prestige but also the subtler psychology of freedom, inner drive and higher purpose.

Since nobody asked me

If I were in charge of funding science, I'd place the majority of my bets on longer-term grants for small, focused labs where the PI does hands-on science and trains new scientists. Grants of 5 years or so would give researchers time to do actual science in teams of 2-8 people. The article recognizes the success of the Howard Hughes Medical Institute model of explicitly encouraging scientists to pursue risky ideas - "the free play of free intellects".

I'd be reluctant to fund 40-person labs, because neither mentoring nor creative thinking scales up to that size particularly well. I'd also avoid large multi-center grants. Funding agencies seem to favor this sort of thing, perhaps as a safer bet, but such grants are a recipe for infighting and politicking. They risk combining the disadvantages of both collaboration and competition with few of the benefits.

Research has a great track record of paying off in the long run. But, Bill Janeway, author of Doing Capitalism in the Innovation Economy says that there has to be a surplus in the funding system to absorb the unavoidable costs of uncertainty, both in terms of when the payoff comes and who captures the gains. In the past, this surplus has come from monopolistic corporations like Bell or Xerox, from the taxpayer, or from flush venture capital. In The Great Stagnation, Tyler Cowen posits that we've picked the low-hanging fruit of the current wave of technology - that we're in a period of stagnation while the last wave is fully digested and the stage can be set for the next wave. To that end, maybe some hedge-fund wizard can cook up some financial innovation for reliably financing risky, long-term projects with unevenly distributed payouts.

[1] Goodbye academia, I get a life
[2] Goodbye Academia
[3] Goodbye academia? Hello, academia
[4] The Big Data Brain Drain
[5] Why So Many Academics Quit and Tell
[6] On Leaving Academe
[7] Science: The Endless Frontier, Vannevar Bush, 1945
[8] The Decline of Unfettered Research, Andrew Odlyzko, 1995

Friday, April 11, 2014

Clojure Koans

In an attempt to reach a slightly higher plane of enlightenment with respect to Clojure, I did the Clojure Koans. What a great way to get familiar with a new language.

It might be worth watching the video solutions: Clojure Koans Walkthrough in Light Table.

Thursday, February 20, 2014

Regression with multiple predictors

Now that I'm ridiculously behind in the Stanford Online Statistical Learning class, I thought it would be fun to try to reproduce the figure on page 36 of the slides from chapter 3 or page 81 of the book. The result is a curvaceous surface that slices neatly through the data set.

I'm not sure what incurable chemical imbalance explains why such a thing sounds like fun to me. But let's go get us some data, specifically the Advertising data set from the book's website:

Advertising <- read.csv("http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv", 
    row.names = 1)
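A quick look at the first few rows is a worthwhile sanity check; the data frame should have columns TV, Radio, Newspaper and Sales, one row per market:

head(Advertising)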

Previously, we fooled around with linear models with a single predictor. We now extend regression to multiple predictors.

\[ \hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1 + \hat{\beta}_2 x_2 + \cdots + \hat{\beta}_p x_p \]

Or, in matrix notation:

\[ \hat{Y} = X \cdot \hat{\beta} + \hat{\beta}_0 \]

where X is n by p and \( \hat{\beta} \) is p by 1, resulting in a \( \hat{Y} \) of size n by 1: one prediction for each of the n samples.

Find a fit for Sales as a function of TV and Radio advertising plus the interaction between the two.
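In R's formula language, TV * Radio is shorthand for TV + Radio + TV:Radio, so the model being fit is:

\[ \widehat{Sales} = \hat{\beta}_0 + \hat{\beta}_1 \cdot TV + \hat{\beta}_2 \cdot Radio + \hat{\beta}_3 \cdot (TV \times Radio) \]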

fit2 = lm(Sales ~ TV * Radio, data = Advertising)
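To see the estimated coefficients, including the interaction term, the standard summary call works:

summary(fit2)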

Before making the plot, define a helper function to evenly divide the ranges. This probably exists somewhere in R already, but I couldn't find it.

evenly_divide <- function(series, r = range(series), n = 10) {
    c(r[1], 1:n/n * (r[2] - r[1]) + r[1])
}
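For what it's worth, seq with length.out appears to do the same job; a sketch of an equivalent (evenly_divide2 is just a throwaway name for comparison):

# n + 1 evenly spaced points spanning the range, like evenly_divide above
evenly_divide2 <- function(series, r = range(series), n = 10) {
    seq(r[1], r[2], length.out = n + 1)
}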

Generate a 2D grid over which we'll predict Sales.

x = evenly_divide(Advertising$TV, n = 16)
y = evenly_divide(Advertising$Radio, n = 16)

Using the coefficients of the fit, create a function f that computes the predicted response. It would be nice to use predict for this purpose. From a functional programming point of view, it seems like there should be a function that takes a fit and returns a function equivalent to f. If such a thing exists, I'd like to find it.

beta = coef(fit2)  # (Intercept), TV, Radio, TV:Radio
f <- function(x, y) beta[1] + beta[2] * x + beta[3] * y + beta[4] * x * y
z = outer(x, y, f)
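As for a function that takes a fit and returns something like f: I don't know of a built-in either, but here's a minimal sketch built on predict (make_surface_fn is a made-up name, not part of any package):

# Close over a fit and two predictor names, returning a function of (x, y)
# that outer can call with its expanded coordinate vectors
make_surface_fn <- function(fit, xname, yname) {
    function(x, y) {
        newdata <- data.frame(x, y)
        names(newdata) <- c(xname, yname)
        predict(fit, newdata)
    }
}

f2 <- make_surface_fn(fit2, "TV", "Radio")
z2 <- outer(x, y, f2)  # should agree with z above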

I copied the coloring of the regression surface from the examples for persp. I'm guessing the important part is to create a color vector of the right size to cycle through the facets correctly.

nrz <- nrow(z)
ncz <- ncol(z)
nbcol <- 100

# Create color palette
palette <- colorRampPalette(c("blue", "cyan", "green"))
color <- palette(nbcol)

# Sum the z-values at each facet's four corners (proportional to the
# z-value at the facet centre)
zfacet <- z[-1, -1] + z[-1, -ncz] + z[-nrz, -1] + z[-nrz, -ncz]

# Recode facet z-values into color indices
facetcol <- cut(zfacet, nbcol)
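One way to test that guess: persp expects one color per facet, and a grid of nrz by ncz z-values has (nrz - 1) * (ncz - 1) facets, so the lengths should line up:

# Sanity check: one color index per facet
stopifnot(length(facetcol) == (nrz - 1) * (ncz - 1))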

OK, finally we get to the plotting. A call to persp sets up the coordinate system and renders the regression surface.

persp returns the viewing transformation matrix, which trans3d uses to project 3D coordinates onto the 2D plot. With it, we draw line segments for the residuals, using higher transparency for points under the surface. We do the same for the actual points, so they look a bit like they're underneath the surface.

# Draw the perspective plot
res <- persp(x, y, z, theta = 30, phi = 20, col = color[facetcol], xlab = "TV", 
    ylab = "Radio", zlab = "Sales")

# Draw the residual line segments
xy.true = trans3d(Advertising$TV, Advertising$Radio, Advertising$Sales, pmat = res)
xy.fit = trans3d(Advertising$TV, Advertising$Radio, predict(fit2, Advertising), 
    pmat = res)
colors = rep("#00000080", nrow(Advertising))
colors[residuals(fit2) < 0] = "#00000030"
segments(xy.true$x, xy.true$y, xy.fit$x, xy.fit$y, col = colors)

# Draw the original data points
colors = rep("#cc0000", nrow(Advertising))
colors[residuals(fit2) < 0] = "#cc000030"
points(trans3d(Advertising$TV, Advertising$Radio, Advertising$Sales, pmat = res), 
    col = colors, pch = 16)

[Figure: the fitted regression surface, with residual segments and data points]

To be honest, this doesn't look much like the figure in the book, and there's reason to doubt the information-conveying ability of 3D graphics, but whatever - it looks pretty cool!