Nothing at all, and in fact there's a site set up specifically to advocate for this: https://utf8everywhere.org/
The biggest problem is when you're working in an ecosystem that uses a different encoding and you're forced to convert back and forth constantly.
I like the way Python 3 does it - every string is Unicode, and you don't know or care what encoding it is using internally in memory. It's only when you read or write to a file that you need to care about encoding, and the default has slowly been converging on UTF-8.
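As a rough sketch of what that boundary looks like in practice (the filename here is just an example), the string itself never exposes an encoding; the encoding only shows up at I/O or when you explicitly convert to bytes:

    # Strings are abstract Unicode; no encoding is visible here.
    s = "naïve café – 日本語"

    # Encoding only enters the picture at the I/O boundary.
    with open("notes.txt", "w", encoding="utf-8") as f:
        f.write(s)

    with open("notes.txt", encoding="utf-8") as f:
        assert f.read() == s

    # Explicit conversion to/from bytes also names the encoding.
    data = s.encode("utf-8")
    assert data.decode("utf-8") == s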
The problem with "every string is Unicode" is when you want to represent things that look like Unicode but aren't actually guaranteed to be Unicode. This includes filenames on Windows (WTF-16, i.e. arbitrary WCHAR sequences) and on Linux (arbitrary byte sequences): they're interpreted as UTF-16 / UTF-8 for display purposes, but if you limit yourself to valid UTF-16 / UTF-8 you cannot represent every path you might come across.
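Python's answer to this is PEP 383 surrogate escapes: undecodable bytes in a filename are smuggled into the str as lone surrogates so they can round-trip. A small sketch, assuming a Linux system whose filesystem encoding is UTF-8 (the 0xFF byte in the name is just an illustration):

    import os

    # A Linux filename is an arbitrary byte sequence; this one is not valid UTF-8.
    raw = b"report-\xff.txt"

    # os.fsdecode() maps the undecodable byte to a lone surrogate (PEP 383),
    # so the name survives as a str even though it isn't valid Unicode text.
    name = os.fsdecode(raw)            # 'report-\udcff.txt'
    assert os.fsencode(name) == raw    # round-trips back to the original bytes

    # Treating it as real UTF-8 text fails, which is the point of the complaint above.
    try:
        name.encode("utf-8")
    except UnicodeEncodeError:
        pass  # lone surrogates are not encodable as strict UTF-8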