The Third Magic: AI
I like the definition of science in this Noah Smith article: in essence, the ability to come up with a simple, generalizable prediction model. For example, to calculate where an artillery shell will fall, we go back to Newton's laws of physics and it's consistently predictable. But for things messier than the natural world, like human language, there are no simple models, and so people argue that AI is the only way: take masses of data and accept an unintelligible model.
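To make that contrast concrete, here is a minimal sketch (my own illustration, not from the article) of the kind of simple, generalizable model Smith means: predicting where a shell lands from nothing but launch speed, angle, and gravity, ignoring air resistance.

```python
import math

def landing_distance(speed_m_s: float, angle_deg: float, g: float = 9.81) -> float:
    """Predict how far a projectile lands on flat ground, no air resistance.

    The closed-form range formula R = v^2 * sin(2*theta) / g follows
    directly from Newton's laws.
    """
    theta = math.radians(angle_deg)
    return speed_m_s ** 2 * math.sin(2 * theta) / g

# Two inputs, one inspectable rule, and the same answer everywhere.
print(landing_distance(speed_m_s=300, angle_deg=45))  # roughly 9174 m
```

Contrast that with a language model: billions of parameters and no closed-form rule you can read off and check by hand.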
A big knock on AI is that because it doesn’t really let you understand the things you’re predicting, it’s unscientific. And in a formal sense, I think this is true. But instead of spending our effort on a neverending (and probably fruitless) quest to make AI fully interpretable, I think we should recognize that science is only one possible tool for predicting and controlling the world. Compared to science, black-box prediction has both strengths and weaknesses.
https://open.substack.com/pub/noahpinion/p/the-third-magic?r=q167&utm_campaign=post&utm_medium=email
The challenge, as the article points out, is what the implications are if we can't understand the models. How good are our predictions really? More importantly, how can we tell when those predictions are off? ChatGPT is a good example: it sounds authoritative (and often is), but it's not easy to know whether it's accurate.