Recent advances in computer technology, be it in hardware or in software, have revolutionized the way researchers do science: problems that are too complex for human or analytical solutions are now easier to address; problems that would take years to solve can now be unraveled in days, hours, or even seconds. The use and development of advanced computing capabilities to analyse and solve scientific problems, also known as computational science, has undoubtedly played a key role in transformational scientific breakthroughs of our last century, making progress possible in many different disciplines.

Elizabeth Hawkins, “A dedicated home for computational science” at Nature (April 21, 2020)
Of course computers can contribute to science breakthroughs. But as Gary Smith pointed out recently here at Mind Matters News, they are also a great way to get wrong or meaningless answers. That fact contributes significantly to the replication crisis in science:
Computer algorithms are terrible at identifying logical theories and selecting appropriate data to test these theories but they are really, really good at rummaging through data for statistically significant relationships. The problem is that discovered patterns are usually coincidental. They vanish when tested with fresh data—a disappearing act which contributes to the replication crisis that is undermining the credibility of scientific research. A 2015 survey by Nature, one of the very best scientific journals, found that more than 70 percent of the researchers surveyed reported that they had tried and failed to reproduce another scientist’s experiment and more than half had tried and failed to reproduce some of their own studies!
When I was a young assistant professor at Yale, one of my senior colleagues, Nobel Laureate James Tobin, wryly observed that the bad old days when researchers had to do calculations by hand were actually a blessing. The effort was so great that people thought hard before calculating. They put theory before data. Today, with terabytes of data and lightning-fast computers, it is too easy to calculate first, think later.

Gary Smith, “Computers excel at finding temporary patterns” at Mind Matters News
There’s no better time than the launch of a new journal to take this type of problem on and take it seriously. Indeed, it’s reassuring to hear:
Many of the problems that computational science tackles today affect millions of people, which makes it integral to ensure that the complex computational analyses result in conclusions that are trustworthy and actionable. Nature Computational Science will champion the reproducibility of scientific outcomes, ensuring that articles meet the highest standards of reproducibility and transparency in reporting.

Elizabeth Hawkins, “A dedicated home for computational science” at Nature (April 21, 2020)
The key word here is reproducibility. When the computer announces that bitcoin returns relate to “stock returns in the beer industry” we know something is wrong. But why does it keep happening?
It would also be interesting to see some thoughtful articles on philosophical and social topics, for example:
– comparing the Turing Test with the Lovelace Test for computer intelligence
– examining the limitations of artificial intelligence before we commit to more than it can deliver
– forecasting how warfare might change if AI drones do most of the fighting
– looking at the actual social changes that may accompany self-driving cars
and, just every now and then,
– poking a bit of fun at the far-out stuff the public is encouraged to take seriously, like the full self-driving taxis that are supposedly just around the corner or the AI that can have mystical experiences
Good luck to the journal. We’ll see what happens.
You may also enjoy:
Which is smarter? Babies or AI? Not a trick question.
Lovelace: The programmer who spooked Alan Turing. Ada Lovelace understood her mentor Charles Babbage’s plans for his new Analytical Engine and was better than he at explaining what it could do.