“Hacking” is what I do to my own phone to make it work better. What you are talking about is not really hacking, even if hackers sometimes do it.
What you are describing is unauthorized access and theft. Your question would be better worded as ‘can someone break into my phone and steal my data?’ I think it’s important to remember that we’re talking about crime here, not about hacking.
If someone gets hold of your phone then they can do what they like to it, assuming they have the requisite knowledge. If the phone is out of your hands then you can’t know what’s gone on with it. I think your question is more specifically about remote access, though.
Remotely attacking a computing device requires a few things; unauthorized access has pretty much the same prerequisites as authorized access. Your phone needs to have at least one working wireless interface, whether wifi or otherwise. It needs to be running at least one service (that is, a program which listens on that interface and responds to messages, such as an FTP server), and the attacker needs an exploit (a way of manipulating the service so as to attack your phone).
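To make “service” concrete, here is a minimal sketch in Python of the kind of program I mean: it binds to a network interface, listens for connections, and answers them. The port number and the echo behaviour are arbitrary choices for the sketch, not anything a real phone actually runs.

    # A minimal "service": bind to an interface, listen, respond.
    # Port 8080 and the echo reply are placeholders for illustration.
    import socket

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", 8080))        # every interface, wifi included
        srv.listen()
        conn, addr = srv.accept()          # any remote host may connect
        with conn:
            data = conn.recv(1024)         # remotely supplied bytes arrive here
            conn.sendall(b"got: " + data)  # a bug in how they are handled would be the exploit

Anything on the phone that waits for incoming messages is some variant of that loop, and every such loop is a potential target.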
Your phone can be accessed by the outside world in two ways:
1— it can run a service, accepting connections from other hosts;
2— it can initiate a connection to another host (for example, requesting a web page) and receive data from another host in response.
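In case 2 the phone does the connecting, but the data that comes back is still chosen by the other end (or by whoever can impersonate the other end). A rough Python sketch, using a placeholder URL:

    import urllib.request

    # The device initiates the connection (case 2 above)...
    with urllib.request.urlopen("http://example.com/") as resp:
        body = resp.read()  # ...but every byte of `body` was chosen by the remote
                            # host; whatever parses it (browser, image decoder,
                            # JS engine) is the attack surface
    print(len(body), "bytes of remotely chosen data")

So “I only browse, I don’t run a server” doesn’t take you out of the game; you are still feeding remotely supplied data to complex software on your device.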
Both kinds of connections are subject to being intercepted by an attacker who pretends to be the intended other host. This is called a “Man in the Middle” or MITM attack.
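The usual defence against MITM on these connections is TLS with certificate verification, which is exactly the check an impersonator can’t pass. A sketch of the difference using Python’s standard ssl module (the unverified context is shown only to make the point; don’t actually use it):

    import ssl
    import urllib.request

    # Default context: verifies the certificate chain and the hostname,
    # so an impersonator without the real site's private key is rejected.
    safe_ctx = ssl.create_default_context()

    # Verification disabled: any certificate and any name are accepted,
    # which means a man in the middle is accepted silently. Don't do this.
    unsafe_ctx = ssl.create_default_context()
    unsafe_ctx.check_hostname = False
    unsafe_ctx.verify_mode = ssl.CERT_NONE

    with urllib.request.urlopen("https://example.com/", context=safe_ctx) as resp:
        print(resp.status)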
All phones I know about run at least one service. They have to do so in order to work as a telephone, for one thing.
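You don’t have to take that on faith for a machine you control: on a laptop or desktop you can list every socket that is sitting there waiting for connections. A sketch using the third-party psutil package (assumed installed; a stock phone won’t let you run this directly, but the idea is the same):

    import psutil  # third-party: pip install psutil

    # Print every socket in LISTEN state -- every "service" in the sense
    # above -- along with the process that owns it. May need elevated
    # privileges on some systems to see other users' processes.
    for conn in psutil.net_connections(kind="inet"):
        if conn.status == psutil.CONN_LISTEN:
            owner = psutil.Process(conn.pid).name() if conn.pid else "?"
            print(f"{conn.laddr.ip}:{conn.laddr.port}  {owner}")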
All smartphones I know about have a web browser. Web browsers can be exploited in a variety of ways; JavaScript is a particularly attractive attack vector.
Google (with Android) and Apple (with iOS) have two different approaches to security. Google sits on discovered vulnerabilities for a while, and then releases them along with the patch that closes the vuln. Other contributors (like AOSP developers or device manufacturers) might disclose the existence of a vuln before a patch exists, so that other developers can guard against it and help in developing the patch. Apple, as far as I know, does not disclose the existence of vulnerabilities in iOS at all, and updates / patches are released as binary blobs only. (I don’t have an iDevice so I don’t know for sure.)
If this is true of Apple, then it’s a case of what NCBS calls “security through obscurity”. This is a common move among security non-experts, but it results in worse security over the long run, because the most secure system is the one that successfully resists the widest variety of attacks. Anyone can design a security system that its own designer can’t beat; it takes openness to make a system that no one can beat. (Or so we think; no one’s managed it yet ;^)
Obscurity increases security only while two conditions hold:
1) While the “vulnerability escrow” authority (in this case Apple) is the only one who ever knows about the vuln (meaning no one else ever independently discovers it);
2) While the authority is 100% trustworthy and leak-proof. How much do you trust Apple? Surely there’s never been a leak of information from that company, right?
@gambitking It’s entirely possible for a computer to be compromised without its owner knowing it. There are botnets out there with millions of nodes; there’s no reason to think that any of those users knows their machine is compromised.
tl;dr: Yes, Virginia, your phone (or other computing device) can be hacked. It happens all the time. Get a firewall, and be careful.