Seen pessimistically, malpractice seems rife in the academic world. Tilburg psychology professor Diederik Stapel made up research data; Fan Liu, a researcher at Erasmus University, worked with Uyghur DNA without clear ethical consent; and Dymph van den Boom, former rector of the University of Amsterdam, was sloppy about citing sources in her work.
Yet little research had been done into scientific integrity, Lex Bouter noticed in 2014, when he became professor of methodology and integrity at the Vrije Universiteit (VU) Amsterdam. Before that, the medical biologist had already been professor of epidemiology and rector at the VU. ‘After three weeks I had read almost all the literature. There was virtually no empirical research. Oops, I thought, was this switch a mistake?’
Almost a decade later, things are different, he says in an office on the eleventh floor of the VU’s main building, overlooking the Nieuwe Meer and Schiphol. ‘The field has grown rapidly.’ Bouter himself has contributed much to that. For the National Survey on Research Integrity of two years ago, he asked thousands of scientists how honestly they worked, with shocking results. He also contributed to the revised Dutch code of conduct for scientific integrity from 2017. The 66-year-old professor recently retired, so it is time to list his six most important lessons.
1. Half of all scientists cut corners
‘About 4 to 8 percent of researchers say they have fabricated or altered results in the past three years. That figure comes from dozens of questionnaire studies, including my group’s. This type of fraud scandal attracts the most attention, as with Diederik Stapel or Don Poldermans (the Rotterdam cardiologist who is said to have invented test subjects in a medical study, ed.).
‘However, this type of fraud is not the biggest problem. That is the questionable research practices. Thirty to 70 percent of scientists are guilty of these; roughly half cut corners. At least, that is what they themselves indicate anonymously in questionnaire studies such as our National Survey. You can never be sure whether everyone fills in such a questionnaire honestly, so it may be an underestimate. An example of such a practice is trying out analyses on your data until one seems to show a connection. Or reporting only the findings that suit you, or photoshopping images. This compromises research quality. That is why I prefer to call them harmful research practices. “Questionable” is too euphemistic for me.’
2. Perverse incentives ruin the system
‘Nobody goes into research in order to cheat, but in practice you have to navigate difficult dilemmas. What is good for truth-finding is not always good for your career. Sound research takes time, and often the results are not what you hoped for. Such negative results are harder to get published in high-ranking journals, and you need those publications for your career. That increases the pressure to present things as rosier than they are.
‘People come to think that publishing a lot is the goal; doing good science fades into the background. In South Africa, which I visited often in recent years, I saw that up close. After the apartheid era, the government there started paying universities per publication. Well, that cost it a lot of money. The number of publications increased enormously. It has since emerged that in some disciplines more than 40 percent of the articles contain plagiarism.’
3. Youth is the future
‘You have to assess scientists more broadly than just on the number of publications. What feedback do you get on your supervision style? How are you as a teacher? Have you shared your datasets with others and then helped them? Have your findings been applied in society? That way you get a much more diverse picture.’
This is in line with ‘Recognition and Rewards’, the universities’ plan for a new assessment system. At the same time, the professors who decide who gets an appointment have themselves come up through the ‘perverse’ system, in which publications in prominent journals count above all. And many universities ultimately declined to participate in Bouter’s survey of scientific misconduct.
‘The academic world is conservative. At the same time, I do see change, thanks to young people advocating for this, for example in De Jonge Akademie (the youth branch of the science academy KNAW, ed.). Three former members are now rector or president of a university: Jeroen Geurts at the VU, Annelien Bredenoord in Rotterdam and Rianne Letschert at Maastricht University.
‘I also put my hopes in another way of publishing that is now emerging: registered reports. You send only your research plan to a journal for assessment, before you carry out the study. If it is approved, you can publish your results afterwards, whether they confirm or undermine your hypothesis. Journals and funders can drive these kinds of changes, just as they do with the sharing of research data.’
4. Role models are crucial
The reality on the work floor is often unruly, especially in top labs where ambitious supervisors prefer to see groundbreaking results. Just try pushing back against that.
‘Good role models are incredibly important. There are two types of supervisors, we saw in our survey. One group is of the “supervise for survival” type. They train people to become successful in science under the current rules of the game. They are the ones who say you have to tweak the statistics and analyses a bit to find the best possible results.
‘The other group is of the “responsible supervisor” type. That side says: science is not about you, it is about finding the right answer. Our research also showed that people who had such a supervisor committed fraud or engaged in questionable practices much less often.’
5. Transparency, transparency, transparency
‘Scientists need to communicate better about how they work and about their errors and missteps. Transparency is the only way forward, also internally: you have to have your data and analyses checked by colleagues. Having people look over your shoulder can be scary, but get used to it.
‘The questionable practices I talked about earlier should be taken out of the sphere of formal integrity investigations. Scientists should be able to acknowledge those mistakes without fear of being fired or reprimanded. The goal should be to learn from them and make sure they do not happen again. Being able to report incidents “safely” in this way works like a charm: when the medical sector started applying it, the number of medical errors fell sharply.
‘In practice you organize this with, for example, a quality committee that carries out internal audits. I set up such a control system myself at the Amsterdam UMC, at the institute for primary care and public health, when I started as professor of epidemiology. It is also used elsewhere, such as at the Nivel health institute. The auditing rotates among everyone in a department. For example, I was involved in an audit of a project that weighed toddlers across the country. How often do you actually check the reliability of those scales, I asked. It went quiet. They had never done that! It turned out that one scale gave different readings than another.’
6. Whistleblowers are necessary, but not to be envied
‘Let it be clear: for serious misconduct such as data fabrication and other outright fraud, integrity procedures remain necessary. But then someone has to sound the alarm.
‘Whistleblowers are never popular, even when they are right. You have to be able to stand firm. Complaining anonymously is sometimes also an option. Or going to an investigative journalist.
‘The United States goes further: there a scientist can go to jail. I am not in favor of that; it discourages the everyday reporting and discussion of minor incidents that you could learn from. Of course sanctions must remain possible, but prevention is better than cure.’
Report the reality honestly
Many studies fail; that is everyday research reality. One way to reflect that reality in publications is the so-called ‘registered report’. A scientific journal judges a study publishable on the basis of its design, i.e. before the study has been carried out. While 96 percent of the regular articles examined in psychology journals reported positive findings (confirming the expected hypothesis), that percentage among comparable registered reports was 44 percent, a study from Eindhoven showed last year.
Three sensational cases
Diederik Stapel. ‘Perhaps the biggest con man in academia’, The New York Times called the Tilburg professor of social psychology. He first came up with conclusions (‘Meat-eaters behave more boorishly than vegetarians’) and then filled in the research questionnaires himself. Dozens of publications turned out to be fabricated. In 2011, three colleagues exposed his fraudulent practices, after which Stapel had to give up his doctorate.
Lorenza Colzato. This Leiden professor of cognitive psychology made up research data, omitted co-authors from publications and had blood drawn from test subjects without ethical consent. She committed fraud in at least fifteen articles. Which ones exactly, the integrity committee would not disclose. She now works on as a researcher in Dresden.
Leo Kouwenhoven. In 2018, the Delft physics professor appeared to have discovered a special particle that could form the basis of a revolutionary quantum computer for Microsoft. In the accompanying Nature publication, however, the authors had omitted data that undermined the discovery. ‘Culpably negligent’ behaviour, the national integrity committee ruled.