In this post, I’ll argue that scientific progress has significantly slowed over the last century, both at a per-scientist and absolute level, and then offer some speculative hypotheses about why this might have occurred.
On Measuring Scientific Progress
To begin with, it is necessary to make clear how we go about measuring scientific progress. Broadly speaking, there are two ways of quantitatively measuring this construct.
First, there is the subjective approach. This approach measures scientific progress during a time period by counting up the number of important events which occurred during that period or the number of important people who lived and made achievements during the period. What events and which people are important is determined by relevant experts.
This expert opinion can be measured in a variety of ways. For instance, you can ask experts to rate a list of potentially significant figures or events, you can analyze the frequency with which individuals or works are cited within expert material, you can analyze the frequency with which individuals are included in encyclopedias, or the amount of space they are given in such works, etc.
Obviously, this approach will not work if experts do not agree on which events/people were important, but, as it turns out, there is a high degree of consensus between experts in a wide variety of fields.
This can be seen by looking at how different experts (both individual experts and groups of experts) rank the relative importance of individuals across various methods, including having people give direct ratings, looking at how frequently individuals are cited, or even analyzing the space each person is given in topic-specific encyclopedias. Such methods produce reliability coefficients of .86 for art and .94 for both philosophy and science. Even when comparing sources from different nations, including comparisons between western and non-western sources, there is a remarkable degree of agreement (Eysenck, 1995; Murray, 2003).
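To make these reliability figures concrete, agreement between two rankings can be quantified with a rank correlation such as Spearman's rho. A minimal sketch, with rankings invented purely for illustration:

```python
# Spearman rank correlation between two hypothetical experts' rankings of
# the same ten figures (1 = most important). The data here are invented.

def spearman(rank_a, rank_b):
    """Spearman's rho for two complete rankings with no ties."""
    n = len(rank_a)
    d_sq = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d_sq / (n * (n ** 2 - 1))

expert_1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
expert_2 = [1, 3, 2, 4, 6, 5, 7, 9, 8, 10]  # mostly agrees, a few swaps
print(round(spearman(expert_1, expert_2), 2))  # → 0.96
```

Reliability coefficients in the .86 to .94 range correspond to this sort of near-identical ordering with only occasional swaps.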
So, there exists a set of people and events that experts consider to be important and there is a rank order of these people and events that experts largely agree on, and this rank ordering does not appear to be culturally biased. We can use these lists to measure scientific progress by comparing how many important people and events occur during different times or in different places.
The second approach is, in some sense, a more objective one. It consists of looking at some measure of technological productivity, say the yields of crops or the speed of computers, and looking at how these things have changed with time. This approach leaves less room for bias, but has two significant drawbacks.
First, this approach is very narrowly focused. For instance, if we were measuring progress by crop yields, electricity would only be valued to the degree that it increased those yields, an obvious underestimation of its value, and the invention of airplanes, or the theory of general relativity, might not be detected at all.
Second, this sort of measure is only sensitive to quantitative changes along a constant measure. If technology fundamentally changes there may be no one number that can measure the productivity of both systems meaningfully.
This approach is also obviously limited to technology and cannot measure progress in other, more basic, areas of science.
Given these limitations, and the fact that the subjective approach yields reliable and seemingly non-biased results, I prefer the “subjective” approach over the “objective” one.
I should also mention that looking at patent rates is probably not a good measure of scientific innovation. It is a measure of how many new ideas are produced, but it is not necessarily a good measure of how many good ideas are produced. Granted, these things will correlate, and so patent rates are better than nothing, but if other measures are available they should be preferred.
Along with these quantitative approaches, there’s also obviously a qualitative approach which consists of looking at the history of science and personally judging the degree of progress that seems to have occurred. This method is far more susceptible to bias, but I think it’s still worthwhile because it allows for the detection of details that numbers will miss.
Importantly, with respect to longitudinal trends in scientific progress these methods all indicate basically the same thing, meaning that we don’t have to dwell too much on which method is best.
The Rise and Fall of Science
Turning to such trends, Murray’s (2003) work tracks the frequency of “significant figures”, or individuals who made great contributions to knowledge, up to 1950. As can be seen, a decline in the frequency of such individuals began in the late 19th century.
We see the same trend when looking at the rate of significant innovations in a data set that extends to 2005. As can be seen, the decline has become more dramatic since 1950, when Murray’s data set ended.
Of course, each generation does achieve at least a few significant scientific breakthroughs. Research which has scientists rank the importance of Nobel prize winning discoveries suggests that the best ideas of the 1970s were better than the best ideas of the 1980s. Ideas from the 1990s and 2000s weren’t included in the data set because almost none of them have been given Nobel prizes, a possible sign of even steeper decline.
Thus, these numbers indicate that the rate of great ideas is declining, and among those great ideas which do manage to be produced, there is also a decline in quality.
Now, some people may be surprised to hear this given the obvious progress that has been made in certain technological fields. For instance, they may note the growth in computational power over the past few decades as evidence of great scientific progress.
However, this data obscures the fact that the number of researchers working on computational power has increased by a factor of 25 over this period in order to maintain that growth rate.
Across a variety of technological fields, we can see a decline in growth once we control for the number of people working in each field. It can also be seen that this decline began, at the latest, in the 1950s.
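The arithmetic behind "controlling for the number of researchers" is simple: research productivity per person is the growth rate achieved divided by the number of researchers achieving it. A sketch with hypothetical numbers (only the ~25x headcount figure comes from the text above; the growth rate is an assumption for illustration):

```python
# Output per researcher = absolute growth rate / number of researchers.
# If growth stays flat while headcount rises 25-fold, per-researcher
# productivity falls 25-fold. The growth rate below is hypothetical.
growth_rate_then = 0.35   # assumed annual growth rate, early period
growth_rate_now = 0.35    # same absolute growth rate maintained today
researchers_then = 1.0    # normalized headcount
researchers_now = 25.0    # the ~25x increase noted above

productivity_then = growth_rate_then / researchers_then
productivity_now = growth_rate_now / researchers_now
print(round(productivity_then / productivity_now))  # → 25
```

The same absolute trend line can therefore mask a large per-scientist decline.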
As we’ve seen, we’ve been able to maintain the absolute rate of growth in certain technological fields by massively increasing the number of people working on these technologies. On more subjective grounds, I think it is reasonable to say that, despite massively increasing the number of people working in academia, we’ve seen an absolute decline in the rate of major theoretical progress in most academic fields.
By theoretical progress, I mean the production of ideas that change fields in a fundamental way and which come to be nearly universally accepted by people in the relevant fields. I am referring to something similar to what Kuhn famously called a “paradigm shift”.
Wikipedia has a list of such paradigm shifts in the natural sciences, and it concords with my impression in that such great shifts largely stop in the third quarter of the 20th century.
Wikipedia doesn’t offer a very complete list for other fields, but I think my impressions in this regard are reasonably unbiased. (Of course, I’d probably think this even if my impressions were biased. So this impression is probably worthless.)
In psychology, the typical list given for this sort of thing would stop with Humanistic psychology and the Cognitive revolution, thus ending in the 1950s. In economics, a typical list would probably end with Keynesianism, Monetarism, and maybe a few other theoretical developments (e.g. real business cycle theory), but would probably not extend past the 1970s. In statistics, such a list would end with things like path analysis, significance testing, and meta-analysis, and so again would largely end in the mid 20th century. In philosophy, a list of paradigm shifts would probably end with post-modernism, existentialism, and analytic philosophy, all of which emerged prior to 1970. People I’ve spoken to have told me that the same is true of fields I know less about, such as linguistics and history.
Of course, this is not to say that there’s been no progress in these fields. But the progress that has happened has largely been either incremental progress, where details are added to pre-existing paradigms, or potential new paradigms that fail to gain widespread acceptance (e.g. evolutionary psychology).
I’m also not making the claim that there have been literally no paradigm shifts since the 1970s. But I am saying that it seems highly probable that such shifts have become much rarer.
So, I think there is a real phenomenon to be explained here, evidenced by narrative accounts, measures of technological productivity, and counts of important scientific contributions. These methods are all imperfect, but the fact that they all converge on the same conclusion renders that conclusion more probable than its negation, in the absence of some other, equally powerful, evidence pointing in the other direction.
The Inevitable Fall Hypothesis
Many people attempt to explain the falling rate of scientific progress by saying that it will inevitably slow with time because people solve the easiest problems in a given science first, meaning that each successive generation has to solve ever harder problems.
Revisiting the charts already discussed, we can see that this was clearly untrue for almost all of human history.
So a proponent of this view needs to explain why it is that this inevitable slowing of science only became manifest within the last century.
We’re also owed an explanation of why it is that this inevitable slowing showed itself at roughly the same time in nearly every scientific field. The fact that this slowing occurred in both very old fields like physics and biology as well as much newer fields like psychology and statistics makes these trends even harder to explain using this hypothesis since it is improbable that various fields would, by random chance, be born at the right time for them to all run out of “easy problems” within a few decades of each other.
So far as I know, advocates of this view have failed to offer satisfactory answers to these points. Of course, this is not to say that there is no truth in the idea that scientific problems have become harder with time. But it seems obvious that this trend is normally not the main determinant of scientific progress because it is overcome by other more important factors. Certainly, this was the case for most of history, and someone who thinks that this has recently become the dominating factor needs to provide evidence showing that this is so.
Granted, this kind of historical analysis is always going to be speculative, but I think there are better theoretical accounts available which, at the very least, should be taken in conjunction with the supposed increasing difficulty of scientific problems in order to give a full explanation.
Explaining Science’s Rise
To explain why I think scientific progress has declined it is necessary to first explain why I think scientific progress exploded a few hundred years ago.
In Murray’s data set, we see that the rate of scientific progress increased radically between the years 1400 and 1600.
National Correlates of Science
Murray also conducts a regression showing that rich countries with non-authoritarian governments are more likely to produce great science.
Unsurprisingly, there is also a link between national intelligence, especially the intelligence of the elite, and national rates of scientific progress. Rindermann (2018) finds that the intelligence level of a nation’s 95th percentile correlates at .76 with its scientific production as measured via various metrics, including Nobel prizes won and the production of highly cited scientific articles.
Individualism is another important national correlate of scientific productivity. For instance, Gorodnichenko and Roland (2012) measured innovation by comparing national patents per person, the size of the advanced technology industry in a nation, the share of GDP taken up by royalty and licensing fees, and the number of citations in scientific and technical journals a nation produces. They found that more individualistic nations tended to have higher levels of innovation, and by several of these measures the association was quite large, with individualism explaining over 40% of national variation in several innovation metrics.
This makes good theoretical sense given that scientific innovation is often an act of nonconformity and the result of great individual drive.
The last national correlate I want to mention is Protestantism. As I’ve argued in a previous post, Protestantism increases national levels of education, democracy, and wealth, and therefore sets up the national conditions in which science tends to prosper while also encouraging cultural attitudes which are antagonistic towards centralized authority.
While no doubt incomplete, a theory which says that science will flourish in nations which are liberally governed, individualistic, Protestant, and highly intelligent gives us substantial help in explaining why science exploded in northwestern Europe, and especially England, around the year 1600.
Science is Done by a Few People
Turning from national correlates of scientific progress to individual level analyses, the first thing to note is that science is largely done by just a few people.
A quantitative version of this observation was made famous by Alfred Lotka who, in the 1920s, noted that roughly 60% of physical scientists have only ever published one article.
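Lotka's observation generalizes to what is now called Lotka's law: the number of authors who publish exactly n papers is roughly proportional to 1/n². Normalizing that distribution predicts the one-paper share directly; a quick sketch:

```python
# Under Lotka's inverse-square law, the share of authors with exactly one
# paper is (1/1^2) / sum(1/n^2) = 1 / zeta(2) = 6 / pi^2 ≈ 0.61,
# close to the ~60% figure Lotka reported for physical scientists.
import math

terms = 100_000  # truncated approximation of the infinite sum
zeta_2 = sum(1 / n ** 2 for n in range(1, terms + 1))  # ≈ pi^2 / 6
share_one_paper = 1 / zeta_2
print(round(share_one_paper, 2))  # → 0.61
```

The same normalization implies that authors with ten or more papers make up only a few percent of the total, which is the skew the following paragraphs document.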
Murray’s data confirms this observation. Unfortunately, he doesn’t provide the corresponding chart for science, but he says that the chart he gives for art looks basically the same as the ones he generated for the sciences.
Other work has shown that a small proportion of academic articles account for more than 80% of citations. This was especially true in the early 20th century, when less than 10% of articles accounted for 80% of citations.
Even today, in most fields it takes years for the average article to get even a single citation. In the humanities, less than 20% of produced papers are ever cited by anyone.
Earlier work in physics found that nearly half of scientific citations went to just 200 physicists, and that the top 10% of scientists were roughly three times as productive as the bottom 50% (Cole and Cole, 1972; Dennis, 1955).
Thus, scientific progress is largely driven by a small number of scientists working in a small number of countries. Thinking of scientific ability through this frame, we see that it is very rare, and, as the historical picture makes clear, the norm is for there to be virtually none of it anywhere. The decline in scientific progress then can be seen as a return to normal, and so plausibly the result of the destruction of whatever conditions led to the previous period of great scientific productivity.
Psychological Correlates of Genius
With respect to the psychological correlates of genius, there are two possible methods of investigation. First, we can study the characteristics of scientists who, while not necessarily geniuses, are the best and most innovative scientists available for study. The idea here is that the differences between them and less noteworthy scientists are probably lesser versions of the differences between regular scientists and geniuses. The second approach is to study the biographies of known geniuses and estimate their psychological profile based on correlates of the trait under study that are likely to show up in someone’s biography.
The most well known study of the latter type is Cox (1926). Cox analyzed the biographies of 301 eminent individuals and had multiple raters use the same indexes to estimate the IQs of these individuals. The inter-rater reliability was around .70, a degree of reliability typical of psychological tests.
The results suggest that eminent individuals possess IQs far above average, and this is especially true of scientists and philosophers. Notably, these IQ estimates are higher than 130, which is roughly the average IQ of scientists and mathematicians (Feist, 2014).
These two data points support the common sense viewpoint that a high degree of intelligence is required for genius, even if we have doubts about the precision of Cox’s estimates.
It is also clear that intelligence, while necessary for genius, is not sufficient. This is obvious since there are far more people with very high IQs than there are geniuses.
This is empirically attested to by Lewis Terman’s famous study of gifted children. Terman collected data on 1,528 Californian children who had a mean IQ of 151. The lowest IQ of the bunch was 135, and 77 of the subjects had IQs of 170 or higher. Relative to the general population, the rate of extreme success among this group was extraordinary. In a 35-year follow-up, 77 of the participants had been included in American Men of Science, a list of America’s top scientists, and 33 were included in America’s Who’s Who (Eysenck, 1995). Eventually, two went on to win Nobel prizes (Feist, 2014). However, the vast majority of the 1,528 subjects did not go on to become geniuses, and so these results also make clear that most people with extremely high IQs do not go on to achieve a notable degree of eminence.
Turning away from intelligence, the largest difference between scientists and non-scientists in Big Five personality traits is in conscientiousness (d = .44). The correlation between the year of publication and the size of this gap was .58, meaning that the conscientiousness gap between scientists and non-scientists has been rising quite rapidly with time. Moreover, when comparing creative and less creative scientists, the gap in conscientiousness was rather small (d = .17). Conscientiousness could plausibly help people do science, but the trait doesn’t differentiate good and bad scientists well, and scientists of the past, when science was more productive as a whole, were less selected for it, so I don’t think we should place too much theoretical stock in this correlate.
The traits that most strongly differ between creative and non-creative scientists are openness (d = .40) and confidence (d = .39). Using Eysenck’s three-factor model of personality, it was shown that, compared to non-scientists, scientists score higher on psychoticism (d = .45) and extroversion (d = .33), and the difference in extroversion is driven by the dominance facet of extroversion rather than sociability.
Some may be unfamiliar with psychoticism, so it is worth noting that this personality factor is defined by being “aggressive, cold, egocentric, impersonal, impulsive, antisocial, unempathic, creative, and tough-minded”.
This is concordant with other research on the personality correlates of scientific achievement which have been summarized thusly:
“…success is more likely for those who thrive in competitive environments, that is for those who are dominant, arrogant, hostile, and self-confident. For example, Van Zelst and Kerr (1954) collected personality self-descriptions on 514 technical and scientific personnel from a research foundation and a university. Holding age constant, they reported significant partial correlations between productivity and describing oneself as “argumentative,” “assertive,” and “self-confident.”
In one of the few studies to examine female scientists, Bachtold and Werner (1972) administered Cattell’s 16 Personality Factor to 146 women scientists and found that they were significantly different from women in general on nine of the 16 scales, including dominance (Factor E) and self-confidence (Factor O). Similarly, Feist (1993) reported a structural equation model of scientific eminence in which the path between observer-rated hostility and eminence was direct, and the path between arrogant working style and eminence was indirect but significant.
The scientific elite also tend to be more aloof, asocial, and introverted than their less creative peers. In a classic study concerning the creative person in science, Roe (1952, 1953) found that creative scientists were more achievement oriented and less affiliative than less creative scientists. In another seminal study of the scientific personality, Eiduson (1962) found that scientists were independent, curious, sensitive, intelligent, emotionally invested in intellectual work, and relatively happy. Similarly, Chambers (1964) reported that creative psychologists and chemists were markedly more dominant, ambitious, and self-sufficient, and had more initiative than their less creative peers. Helson (1971) compared creative female mathematicians with less creative female mathematicians, matched on IQ. Observers blindly rated the former as having more “unconventional thought processes,” as being more “rebellious and nonconforming,” and as being less likely to judge “self and others in conventional terms.” Finally, Wilson and Jackson (1994) reported that both male and female physicists were more introverted and conscientious than nonscientist controls.” – Feist (2012)
This all makes good theoretical sense. To be recognized as a genius a person normally must think of a new idea and then convince people that old ideas are wrong. This is, by its very nature, a creative, non-conformist act which requires some degree of arrogance and a great deal of persistence. On the whole, consistently telling everyone else that they’re wrong and you’re right could be described as anti-social. Certainly, being comfortable with being anti-social would make this process easier.
Demographic Correlates of Genius
Reviews of people who have made great scientific contributions also suggest that they tend to come from middle class homes.
Religiously, Jews, atheists, and, to a lesser extent, Protestants are overrepresented among geniuses, while Catholics, Muslims, and followers of Asian religions are underrepresented. Jews, atheists, and Protestants share an IQ advantage over neighboring religious groups and are relatively non-conformist. Protestants also tend to set up democracies, which, as we’ve seen, is a correlate of scientific progress.
We can also see that the vast majority of significant figures were Europeans born after the year 1600.
And most of the science done post-1600 was done by North Western Europeans, especially the English.
This is noteworthy since this was not true of the (relatively scarce) scientific innovation done just prior to 1600.
Historically, something like 97% of geniuses have been white.
And roughly 98% of them have been men.
Thus, nearly all great figures in science have been white men.
Works of genius also tend to be produced between the ages of 30 and 40, plausibly reflecting the age trajectory of intelligence.
There is also a strong tendency for geniuses to have poor or non-existent relationships with their parents. Eysenck (1995) summarizes this literature well:
“Eisenstadt (1978) studied 699 famous historical figures and found that one in four had lost at least one parent before the age of 10. By the age of 15 the loss had exceeded 34%, and 45% before the age of 20. These losses almost certainly exceed those suffered by the average citizen of those times, although estimates of life expectancy are hard to come by until quite recent times. We can compare these figures with those covering the beginning of the twentieth century, and find that death of mother or both parents by the age of 15 was three times more frequent in the sample of eminent people than in the general population. The conclusions of Albert (1980) were similar, for artistic and scientific achievers as for politicians and eminent soldiers. He found a threefold rate of parental loss, as compared with twentieth-century populations…
Goertzel and Goertzel (1962) studied 400 eminent historical figures and found that 75% of them had suffered broken homes, rejection by their parents, many of them over-possessive, estranged or dominating. More than one in four had a physical handicap. In a later study, Goertzel et al. (1978) found that 85% of 400 eminent people in the present century had come from highly troubled homes – 89% of the novelists and playwrights, 83% of the poets, 70% of the artists and 56% of the scientists. In a similar vein, Berry (1981) found that literary Nobel Laureates came more often from poor backgrounds, and suffered physical disabilities than did scientific Laureates…
Affection, attachment, warmth, closeness between creative achievers and their parents were certainly more likely to be absent in the scientists studied by Chambers (1964), Roe (1951a) and Stein (1962), the psychologists studied by Drevdahl (1964) and Roe (1953), and the architects studied by MacKinnon (1962b). Scientists in general were usually feeling distant from their parents, psychologists on the other hand experienced actual open hostility and rejection.”
Through a variety of mechanisms (direct causation, GxE correlations, etc.), these poor relations with parents might be related to the anti-social tendencies of geniuses.
Explaining the Science Boom
So, the modal scientific innovator is a non-Catholic white male who is smart and creative, has a troubled history with his parents, thinks in somewhat strange ways, gets his best ideas sometime in his 30s, and pushes ideas against the status quo in a non-conformist fashion.
Northwest Europe between the years 1600 and 1900 was fairly ideal for this sort of thing. For one, it was full of white males who weren’t Catholics. Its governments and general cultures were relatively liberal, and so allowed for strong dissent and innovation. There was also a culture which emphasized an intense liberal education for upper-class males, giving them a great deal of information to work with by the time they were in their 30s.
Institutionally, science was also remarkably free. To become a respected scientist, one merely had to produce good work and convince a few others of its value. And scientists were often able to make a living either by being directly paid by students or by convincing a small number of rich individuals to fund them. This is important for people with new ideas, because it meant that they only needed to convince a small number of people of their work’s worth in order to make a respectable living.
Explaining The Decline
With this explanation of scientific progress in mind, a partial explanation of scientific decline is immediately made obvious. It simply involves the undoing of every one of these factors which allowed for great scientific achievement in the first place.
Race, Sex, and Religion
Consider the demographics. Since the 1960s, advanced degrees have increasingly been given to women, who now make up the majority of degree earners in many fields. This is true in the US and around the world.
Notably, this increase in women academics may be partly driven by discrimination. Williams and Ceci (2015) sent applications for academic positions to 2,090 STEM faculty members. Across 5 experiments, they consistently found a 2:1 bias in favor of women.
Of course, we’ve also seen an increase in non-whites taking academic positions in historically white countries. The result of these forces combined is that today in America only 44% of college faculty are white males.
In American universities, we’ve also seen an increase in the rate of Catholic professors, though this has been offset by an increase in Jewish professors as well (Lipset and Ladd, 1971).
Given how dominant white males have been in the history of scientific genius, it would be extremely difficult to replace them with members of other groups who are equally likely to be geniuses. We have not even attempted to do this in a serious fashion, and so it would be surprising if this demographic shift did not lead to a decline in the rate of genius among academics.
Intelligence and Creativity
At the same time, various lines of evidence suggest that genotypic IQ, and phenotypic general intelligence, declined over the last century or so.
| Study | Finding |
|---|---|
| Dutton et al. (2016) | In recent decades, IQ scores have been declining in Norway, Denmark, Britain, the Netherlands, Finland, France, and Estonia. |
| Reeve et al. (2018) | A meta-analysis finds that IQ is negatively correlated with fertility at -.11. |
| Nijenhuis et al. (2013) | The g loading of an IQ subtest correlates at -.38 with the degree to which it has risen as part of the Flynn effect. |
| Woodley et al. (2015) | Between 1890 and 1988, English reaction times increased by roughly 30 ms. |
| Woodley et al. (2013) | Between 1889 and 2004, reaction times in the Western world increased enough to imply a 13.35-point reduction in g. |
| Woodley et al. (2016) | Since the 1980s, color discrimination ability (an extremely strong correlate of general intelligence) has declined enough to imply a 3.15-point loss in g per decade. |
| Woodley et al. (2015) | The use of complex vocabulary has declined significantly since the 19th century. |
| Twenge et al. (2019) | Despite mass increases in educational attainment, American vocabulary skills declined between 1974 and 2016. |
| Woodley et al. (2019) | Polygenic score analysis indicates that genotypic IQ is declining by at least .208 points per decade in the US. |
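If figures like these are even roughly right, the consequences for the far right tail are larger than the mean decline suggests, because under a normal distribution tail frequencies fall faster than the mean. A sketch, assuming a normal IQ distribution (SD 15) and treating one century at .208 points per decade as a ~2.08-point mean shift (both assumptions, for illustration only):

```python
import math

def fraction_above(threshold, mean, sd=15.0):
    """Fraction of a normal(mean, sd) population above a threshold."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# A ~2-point drop in the mean thins the IQ-145+ tail disproportionately.
before = fraction_above(145, mean=100.0)
after = fraction_above(145, mean=100.0 - 2.08)
print(before / after)  # the 145+ share shrinks by more than a third
```

Since genius-level work is drawn almost entirely from this tail, even a small mean decline could matter a great deal for the rate of genius.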
All over the world, it’s been seen that populations with lower IQs are reproducing at greater rates, suggesting that the decline in genotypic IQ is a global phenomenon.
| Study | Sample | IQ-fertility correlation |
|---|---|---|
| Meisenberg (2009) | 170 nations | -0.83 |
| Kura (2013) | 43 Japanese prefectures | -0.43 |
| Lynn et al. (2015) | 33 Indian states | -0.35 |
| Lynn et al. (2015) | 15 regions of Turkey | -0.84 |
| Boutwell et al. (2013) | 200 US counties | -0.38 |
| Templer et al. (2011) | 50 US states | -0.35 |
There’s also evidence suggesting that creativity has been declining.
However, some research suggests that this trend depends on the domain of creativity being looked at: “Overall, the analysis of the visual artworks indicates a rise in sophistication and complexity, as well as an increase in the number of works portraying a less conventional presentation of subjects (Table 1). By contrast, the analysis of the creative writing stories indicates a significant increase in young authors’ adherence to conventional writing practices related to genre and a trend toward more formulaic narrative style, though language is significantly more conversational, casual and invented (Table 2).” – Weinstein et al. (2014)
On the whole, it seems plausible that we’ve become less intelligent and less creative in ways that are relevant to scientific work. If this is so, it will have obviously contributed to our decline in scientific progress.
Expanding the Circle of “Experts”
This problem is plausibly compounded by the fact that we now send a huge number of people to college and so require that there be a huge number of academics to teach them. To accomplish this, we have to lower the bar of what is required in order to be an academic.
In another essay I provided evidence showing that most academics today can’t use their academic knowledge to make real world predictions, generally don’t understand the statistics upon which their research rests, and fail to spot elementary errors during the peer review process.
Because our population used to possess greater general intelligence and our population of academics used to be more selective, it seems plausible that academics of the past were more competent. A decline in competence will obviously reduce the rate of innovation per scientist. It may also make the work of geniuses more difficult, because the field which they must convince of their innovation now includes a greater number of incompetent individuals who may be relatively immune to the force of reason.
This problem may have been made worse by the rise of what I call conformist science.
The Rise of Conformist Science
By conformist science, I mean a scientific community in which a great deal of conformity is required in order to be a successful scientist. As I’ve reviewed, great scientists are generally non-conformist and an act of scientific innovation is inherently an act of non-conformity. Because of this, institutionalizing conformity in science seems likely to inhibit scientific progress.
The required degree of conformity in western science has been increased in multiple ways over the last century or so. Today, before people can even become academics, they must finish graduate school. The dissertation process involves dedicating years of one’s life to a question approved by an existing academic. This often means that aspiring academics can’t focus on anything too radical, as their advisor will be unlikely to approve something which truly departs from the status quo.
This is probably more problematic than it at first seems, for at least two reasons. First, the topic of one’s dissertation often defines the rest of one’s academic life, and so may to some degree constrain the boldness of one’s entire career. Second, geniuses often come up with their genius idea in their 30s following many years of study. Innovative insights may be lost if the focus of study in one’s 20s is constrained by existing academics, since the intelligence needed for such insights diminishes at later ages.
Of course, it is possible to get lucky and be able to do a dissertation on something truly groundbreaking, or to study material outside of school and then switch to a more interesting problem once one has become an academic. But the modern dissertation process creates an incentive structure which may make innovative work less likely than it otherwise would be.
Moreover, once one becomes an academic, one is still under strong pressure not to say anything too radical until one has acquired tenure. This typically doesn’t occur until someone is in their late 30s or older. Prior to this, seriously challenging anything fundamental in one’s field may greatly hamper one’s career.
Even once one has tenure, for their work to be seen as respectable it must pass through peer review. Of course, the rise of peer review has meant that scientists must convince the scientists defending the status quo of the legitimacy of their work in order for it to even be published. (For more on peer review, see my essay on experts). It is an inherently conformist process.
This is all somewhat speculative. But it seems likely that scientific innovation has been hampered both by requiring more conformity of academics than was required in the past, and by the establishment of a system which incentivizes people to wait until middle age before attempting anything radically innovative.
“Science” as the Sole Authority
The final speculation I want to offer is that science has been made worse by the fact that educated people today regard the official scientific community as the sole source of scientific knowledge. Knowledge produced by others is disregarded because of who it was produced by.
This means that many would-be geniuses can’t work around the problems of academia by simply doing their work in a different institutional context. To do so would be to volunteer to have one’s work not be taken seriously.
In conclusion, while we cannot be certain, the evidence suggests that scientific productivity has declined. With less certainty still, I don’t think this can be explained merely by saying that scientific problems are becoming harder. Rather, I think it is likely that demographic and institutional changes, both in the general population and within the scientific community specifically, have significantly contributed to this decline.