General Question

zenzen's avatar

Can a computer start by itself and switch on the camera?

Asked by zenzen (4087 points) September 21st, 2014

It was in sleep mode. When I came back to it, it was on and the IM camera was on.

There was no-one else in the house.

Could it have been activated remotely?


16 Answers

gorillapaws's avatar

@zenzen If it was asleep and not powered down, then I’m pretty sure the answer is yes.

johnpowell's avatar

Absolutely. It actually takes just a few lines of code. This is why I tape over the camera and mic on my computers.
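
For illustration, this is roughly what “a few lines of code” looks like in Python, assuming the third-party opencv-python package is installed; the output file name is just an example. Any program running with normal user privileges could do the equivalent silently.

    # Minimal sketch: grab one frame from the default webcam (opencv-python).
    import cv2

    cam = cv2.VideoCapture(0)               # open the first camera device
    ok, frame = cam.read()                  # capture a single frame
    if ok:
        cv2.imwrite("snapshot.jpg", frame)  # example output file name
    cam.release()                           # release the camera again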

zenzen's avatar

What would make the camera come on? I hadn’t even used it, so it started up from scratch.

LuckyGuy's avatar

Yes. Look at this question from about a month ago: Is your PC camera blocked right now?

zenzen's avatar

Thanks guys

elmillia's avatar

No and no.

If the computer is suspended or sleeping, then it can revive itself. If the computer is off, then the only way it can start up is if the BIOS is set to return the computer to its previous powered-on state when power is restored.

As for your camera: by default, no. None of the major operating systems include software that turns the camera on by default, but you may have software installed that would do it. Peeping-tom malware can also do it.

But what I suspect happened, since you mentioned your IM client, is that the client received a request for a video chat and may have automatically authorized it, opened a window, and turned on the camera. It may also have brought the computer out of standby.

I would advise starting with the IM client and going over its settings very carefully. Also go over any drivers the camera uses and any software it might use, and then do a virus scan.

Don’t put tape over it as suggested above; there is no need. If you don’t use the camera and mic, you can go into your operating system’s hardware manager (in Windows, right-click Computer in the Start menu and open Device Manager) and simply disable the camera and mic.
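
If you would rather script that than click through Device Manager, here is a rough sketch in Python that shells out to PowerShell’s PnP cmdlets. It assumes a recent version of Windows, needs an administrator prompt, and the device class may show up as “Image” rather than “Camera” depending on the webcam, so check what Device Manager lists first.

    # Rough sketch: disable all camera-class devices on Windows via PowerShell.
    # Run from an elevated (administrator) prompt; re-enable with Enable-PnpDevice.
    import subprocess

    ps_command = (
        "$cams = Get-PnpDevice -Class Camera; "
        "foreach ($c in $cams) { Disable-PnpDevice -InstanceId $c.InstanceId -Confirm:$false }"
    )
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_command], check=True)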

But as long as you don’t have intrusive software installed (IM clients CAN be intrusive if not configured correctly, and this includes Skype), you shouldn’t have to worry about it. Windows, Linux and Mac OS X do not contain software to spy on you out of the box.

LuckyGuy's avatar

@elmillia Good advice. But your last sentence is telling: “Windows, Linux and Mac OS X do not contain software to spy on you out of the box.” That is the problem.

People pick up spyware from many sources: email, links, etc. My favorite was the electronic picture frames, made in China, that came loaded with all kinds of spyware. People installed the frame, automatically answered “yes” to the software-download prompt when they plugged it in, and they were screwed.

My camera is not taped over. I made a tape flap that covers it so it stays clean.

rexacoracofalipitorius's avatar

To prevent your computer from being woken from sleep remotely, look in your BIOS settings for something called “Wake on LAN” and turn that off. It might be filed under a category called something like “ACPI Settings”. Or it might not; I don’t know what your BIOS is like.
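
For what it’s worth, the remote wake-up that setting controls isn’t exotic. A standard wake-on-LAN “magic packet” is just six 0xFF bytes followed by the target machine’s MAC address repeated 16 times, broadcast over UDP; here is a minimal Python sketch (the MAC address is a placeholder). Anyone on your network who knows your MAC address can send one, which is why disabling the feature in the BIOS stops remote wake-ups.

    # Minimal sketch: send a wake-on-LAN magic packet over UDP broadcast.
    import socket

    def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
        mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
        packet = b"\xff" * 6 + mac_bytes * 16        # the "magic" payload
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            sock.sendto(packet, (broadcast, port))

    send_magic_packet("00:11:22:33:44:55")           # placeholder MAC address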

“Windows, Linux and MacOSX do not contain software to spy on you out of the box.”
In the case of Windows and Mac OS X, we can’t know this: most of the software in those OSes is not inspectable by the user. You can’t see the source code; it’s under wraps. If you are concerned with security (and unfortunately, these days you should be), then you should only trust this kind of software to the extent that you trust the vendor… and few vendors have your best interests foremost in mind.
Linux is mostly open source. Most distributions of Linux provide closed-source BLOBs* which can be inserted into the running kernel (generally as device drivers or loadable firmware). Normal closed-source software on Linux can be sandboxed to some extent, but these kernel modules get special powers (because they run as part of the kernel).
It’s really hard to get a fully Free / Open Source software system going. gnu.org provides a few such systems, including Gnewsense, GUIX and GNU/HURD. These are free of such binary-blob drivers, and as a result lack support for a lot of hardware.
Binary blobs are generally provided by hardware manufacturers who want their hardware to work with Linux, but don’t want to reveal their deeeeply secret sauce to the world, fearing that it will put them at a competitive disadvantage.

When customers’ security conflicts with vendors’ intellectual property, security loses.

*“Binary Large OBject”: it doesn’t really mean much of anything.

gorillapaws's avatar

@rexacoracofalipitorius Everything you said is true, of course. I do think it’s reasonable to point out that OS X comes locked down with Gatekeeper (http://en.m.wikipedia.org/wiki/Gatekeeper_(OS_X)) by default, so only apps registered with Apple can run unless the user specifically enables them. Furthermore, the camera is sandboxed, so an app needs permission from Apple to be allowed access.
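
As a side note, you can ask Gatekeeper for its verdict on a given app yourself. A small sketch, assuming macOS with its built-in spctl tool; the application path is a placeholder:

    # Sketch: query macOS Gatekeeper's assessment of an app via the spctl tool.
    import subprocess

    result = subprocess.run(
        ["spctl", "--assess", "--verbose", "/Applications/Example.app"],  # placeholder path
        capture_output=True, text=True,
    )
    # spctl reports "accepted" or "rejected" plus the source of the verdict
    print(result.stdout or result.stderr)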

You’re right, of course, that it’s POSSIBLE that Apple is secretly spying on its users, although if that ever got found out (and it would), the backlash would be so violent that you’d see people smashing their Apple stuff on the doorstep of Apple Stores around the globe. There is no useful business case for them to do so, just as there is no useful business case for banks to steal all of your money: the amount they’d lose because they violated trust is orders of magnitude larger.

Dutchess_III's avatar

Do you have a cat?

RocketGuy's avatar

@rexacoracofalipitorius – great advice. My daughter’s laptop kept getting woken up by wireless LAN, and then, because of the camera/touchscreen setting, it would take pictures all day (luckily mostly of the inside of her laptop case). She turned off the “take pictures by tapping the screen” function. The laptop somehow decided that “wake on LAN” was bad, so it turned off the WiFi card – then my daughter had to reboot to get WiFi back. After I turned off “wake on wireless LAN”, all was good.

rexacoracofalipitorius's avatar

@gorillapaws Since I can’t audit the source code of Gatekeeper, I can’t verify that only apps verified by Apple (or by the user) can run. Even if I trust Apple to know which apps are malicious and which are beneficent*, I can’t be sure that Gatekeeper doesn’t have some flaw that might allow an untrusted third party to run code.

In the case of Debian, the package manager by default doesn’t install unsigned packages, or indeed anything outside Debian’s “main” repository. The packages it installs are signed binaries, so I’m trusting that Debian’s keys have not been compromised and that no Debian Developer has maliciously inserted some code to harm me or others.
All the binary packages in “main” are accompanied by source packages. If I got paranoid that the packages I was installing had been tampered with, I could download the (signed) source package, compare its hash with the published one for that source package, build the binary from the source package, and compare the built binary’s hash with the installed binary package. Anyone can do this, not just Debian users. This gives me a level of trust in the code proportional to my level of trust in Debian’s maintainers and developers. I can use this method to verify code down to the level of the compilers and debuggers, but that’s about as far as I can go.
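
The hash-comparison step in that chain is simple enough to sketch in a few lines of Python; the file name and the published hash below are placeholders for whatever the signed repository metadata actually lists.

    # Sketch: compare a downloaded file's SHA-256 against a published hash.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    published = "0123abcd..."                    # placeholder: hash from the signed metadata
    actual = sha256_of("example.orig.tar.gz")    # placeholder file name
    print("match" if actual == published else "MISMATCH: do not trust this file")
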
There’s a chain of trust that extends all the way down to your CPU’s native hardware instruction set. There’s no way I can verify every step in the toolchain, but Free and Open Source software gives us the ability to check and guard against the largest classes of vulnerabilities and exploits.

In contrast, Apple recently claimed that no third party (such as law-enforcement agencies) can force Apple to provide access to users’ devices, because of changes that Apple has made. This is a safe claim for them to make, since we can’t review those changes or the code to which they apply. Probably Apple really did pay some devs to make such a change… but they could have saved the money and just said they did, and we’d be none the wiser.

I don’t suspect that either Apple or Microsoft is “spying on their users” in the way you suggest. I’d be unsurprised to find out that either company is collecting data on their users that those users might not want collected. You pays your money and you takes your choice.

*Apple does not have a perfect record in this regard, though that need not make anyone distrust them.

dappled_leaves's avatar

It does not really matter whether Apple or Microsoft collects or pays attention to individual users’ data, as long as they are passing it along to the NSA anyway. I don’t know why we would give any of these companies the benefit of the doubt. Safest to assume that if it can be done, someone might be doing it. That’s not paranoia; it’s just common sense.

gorillapaws's avatar

@rexacoracofalipitorius Open source does have many security advantages, although it’s possible for malicious coders to recognize a security vulnerability and, instead of filing a bug and submitting a code fix, simply keep quiet and write malware that exploits the vulnerability. The model really requires lots of keen eyes from good actors to make it secure. There is a huge stack of code (as you mention), and so lots of opportunities for exploits if some nooks and crannies in the code don’t get as much attention. There are over 70 million lines of code in Debian Linux. Ultimately there is a certain amount of faith you have to put in the process: that there are enough eyes on that code to find and fix vulnerabilities faster than they can be exploited. I’m not advocating security through obscurity, btw.

History has proven the model effective, but it’s certainly possible that people are exploiting unpublished vulnerabilities right now. I put faith in Apple that their Gatekeeper implementation is solid. It’s been out for several years without any major problems, which is a very good sign. Not being able to see code written by a team of industry leaders with incentives to make things secure is, in practice, pretty comparable to having access to 70+ million lines of code that I will never read.

I really think Apple is in a unique position with regard to data collection. They’re in the business of selling hardware at premium prices based on recognition as a premium brand. Google is in the business of collecting information about customers and using it to sell advertising; everything else in the company is secondary to that mission. Microsoft is in the business of selling software, but they also have paid search advertising through Bing, so the lines get blurrier for them. Of all of these companies, Apple is the one with no real financial incentive to collect user data, and a huge potential loss if they did so. For me it’s not blind faith, but recognizing that the incentives aren’t there for them to betray their users’ trust.

rexacoracofalipitorius's avatar

My point is not that open-source software is invulnerable. It’s that security is based on trust. If you trust Apple, then you base your security assessment on your faith in Apple (backed up by warranties, user agreements, whatever applies). If you’re a major vendor to Apple, then your faith in Apple might be backed up by an SLA, a contract, or some other strong legal instrument. If you bought a used iThing off Craigslist, then Apple owes you nothing and you have no reason to trust them at all until you enter into a business relationship with them. You almost certainly won’t lose anything by trusting them, but you have nothing to back you up if they don’t treat you the way you expect.

I trust Debian because I can inspect the code, because the developers are responsive to the community (they are part of it) and because I know how the development environment works.
You point out that Debian contains “70+ million lines of code”. That’s an entire operating system (actually several different ones), including ports to various architectures and a huge amount of user software. Debian supports more architectures than Microsoft and Apple combined, and user software is distributed in the same archive as system software, so yeah, that adds up to a lot of stuff. However, my Debian system doesn’t have every package for every variant of every kernel on every architecture installed on it.
The little it does have is still too much for me to read in my lifetime, even if I were sufficiently expert in C, C++, Lisp, Scheme, Tcl, Perl, Python and Java to read other people’s code, which I’m not necessarily (only sometimes :^). I don’t feel I need to read it, because a minimum of three other people have done so publicly, and I can go back and look at the progress and results of that review if I want to.
I don’t read source code to find vulnerabilities in my system. I know some folks do read source to find exploits, but I’m pretty sure that most exploits are found by experiment, by the use of fuzzers and other runtime tools, and by accident. Most exploits aren’t obvious in the source, or they’d be patched before they get built, no?
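
To make the “found by experiment” point concrete, a dumb fuzzer really is about this small. A toy Python sketch; parse_record() is a hypothetical target, standing in for whatever library call you actually want to stress.

    # Toy fuzzer sketch: throw random byte strings at a parser and keep the crashers.
    import random

    def parse_record(data: bytes) -> None:
        # hypothetical parser under test, not a real API
        if len(data) > 4 and data[0] == 0xFF:
            raise ValueError("malformed header")

    crashes = []
    for _ in range(10_000):
        blob = bytes(random.getrandbits(8) for _ in range(random.randint(0, 64)))
        try:
            parse_record(blob)
        except Exception:
            crashes.append(blob)    # each crashing input is a lead worth investigating

    print(f"{len(crashes)} crashing inputs found")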

I don’t trust Google Play or Apple’s App Store, because I have no idea how the software is reviewed before it’s distributed. I suspect that in Google’s case it’s not reviewed at all (except after the fact, by user reviews; that’s not nothing, but it’s not enough). I regard binaries I install from Play as untrusted unless they are signed by an entity I trust (I trust Google to sign off on Google things, but not third-party binaries, because I don’t know that Google won’t sign things without checking them out to my satisfaction). I do trust F-Droid in this way, because F-Droid is all FOSS.

