Engineers generally design conveyances (planes, ships, locomotives, and cars, to name a few) so that a human operator can override them when the need arises.
That’s why subway and railway engines have “deadman switches”: the operating engineer has to keep a grip on the switch, and if it is released, it applies the brakes, shuts down the engine, stops the train, and signals to the control yard that it has done so. It’s why planes that fly on autopilot 90% of the time can still be taken over by the pilot at any moment. And it’s why (so far, anyway) “driverless” cars include window glass, steering wheels, brake pedals, and accelerators; a truly “self-driving and only self-driving” car would have no need for any of those things.
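Purely as a sketch of the principle (and not any actual train-control code), the deadman pattern amounts to a watchdog: the system treats the *absence* of a periodic “operator present” signal as a command to stop. The names here (HEARTBEAT_TIMEOUT_S, DeadmanSwitch, the print-statement actions) are all hypothetical stand-ins:

```python
import time

# Hypothetical grace period: how long the "operator present" signal may
# lapse before the system assumes the engineer has let go.
HEARTBEAT_TIMEOUT_S = 2.0

class DeadmanSwitch:
    """Treats silence as a stop command: no grip signal means brakes on."""

    def __init__(self):
        self._last_heartbeat = time.monotonic()
        self.tripped = False

    def heartbeat(self):
        """Call whenever the operator's grip on the switch is confirmed."""
        self._last_heartbeat = time.monotonic()

    def poll(self):
        """Check periodically; trip the fail-safe once the signal lapses."""
        elapsed = time.monotonic() - self._last_heartbeat
        if not self.tripped and elapsed > HEARTBEAT_TIMEOUT_S:
            self.tripped = True
            self._fail_safe()

    def _fail_safe(self):
        # Stand-ins for the real actions described above.
        print("applying brakes")
        print("shutting down engine")
        print("signaling control yard: stopped by deadman switch")
```

The important design choice is that stopping is the *default* outcome: the system fails toward safety unless it is continuously told otherwise.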
Which is not to say that a self-driving car must be built without those things. But a properly engineered one will still have an override built in, so that an operator can disconnect a failed or failing controller and engage a redundant system (another engineered feature) that takes over for an emergency stop and/or controlled parking, or, at the very least, simply shut down the vehicle wherever it is and let other drivers (or driverless cars) maneuver around it as they would any other roadway obstruction or hazard. I would imagine that the fail-safe process on a driverless car would be: pull over and come to a complete stop, turn on the hazard blinkers, and send an emergency signal to some agency that probably hasn’t even been invented yet.
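That escalation logic (try the least drastic fallback that still works) can be sketched in a few lines. Again, this is purely illustrative: the Vehicle class and every method on it (engage_redundant_controller, pull_over_and_stop, broadcast_distress, and so on) are hypothetical names, not any real vehicle API:

```python
class Vehicle:
    """Stub vehicle interface; every method is a hypothetical stand-in."""

    def disconnect_primary_controller(self):
        print("primary controller disconnected")

    def engage_redundant_controller(self) -> bool:
        print("engaging redundant controller")
        return True  # pretend the backup came up cleanly

    def pull_over_and_stop(self):
        print("pulling over and coming to a complete stop")

    def hazard_lights(self, on: bool):
        print(f"hazard lights {'on' if on else 'off'}")

    def broadcast_distress(self, message: str):
        print(f"distress signal: {message}")

    def shutdown(self):
        print("vehicle shut down in place")


def handle_controller_fault(vehicle: Vehicle):
    """Escalating fail-safe: try the least drastic option that still works."""
    vehicle.disconnect_primary_controller()
    if vehicle.engage_redundant_controller():
        # The redundant system performs the controlled stop described above.
        vehicle.pull_over_and_stop()
        vehicle.hazard_lights(on=True)
        vehicle.broadcast_distress("controller fault; stopped on shoulder")
    else:
        # Last resort: halt wherever the vehicle is and let other traffic
        # maneuver around it as a roadway obstruction.
        vehicle.shutdown()
        vehicle.hazard_lights(on=True)


if __name__ == "__main__":
    handle_controller_fault(Vehicle())
```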
On the other hand, even the best-engineered systems can fail: the redundant systems can fail, the backup system can fail, the override can fail, and the human operator can be incompetent or incapable of taking control. Accidents will happen. And when they do, engineers go back to work to figure out how to address those faults and harden the systems: make them more robust, able to deal with additional and less-predictable inputs, and continually safer. After all, we’re not simply driving “horseless carriages” any more, either, are we?