Self-driving cars are becoming a reality. While not yet widespread, autonomous vehicles are starting to hit the roads for testing in a number of states.
Some drivers may find this trend alarming. The safety concerns are obvious. Can a computer really drive like a human? At the moment, the answer seems to be no. However, it is not that autonomous vehicles are less safe than human drivers. On the contrary, they tend to be cautious — so cautious, in fact, that humans don’t know how to properly interact with them.
This disparity in driving styles is largely responsible for the risk of accidents between human drivers and self-driving cars.
An autonomous vehicle is a car that is capable of driving itself. Instead of human input, an autonomous vehicle relies on signals from a computer to know when and how to move.
Every decision an ordinary driver makes on the road — when to brake, when to change lanes, how quickly to accelerate, when the coast is clear to make a left turn, for example — is made instead by artificial intelligence. This intelligence is designed to take in all the information around it, including what human drivers on the road are doing, as a basis for its decisions.
What’s more, the computers driving these cars adapt to learn how to handle new driving situations. The designers of autonomous vehicles subject them to simulations so that the cars can teach themselves to become better drivers on their own.
While a number of states have introduced legislation regulating the use of autonomous vehicles, South Carolina law is relatively silent on the matter. The only provision on the books here deals with the very specific situation of a number of driverless cars moving together in a platoon, and how much space they must keep between them.
The lack of other legislation on the issue essentially means that autonomous vehicles are not specifically prohibited from driving on South Carolina roads. This does not mean you are especially likely to encounter a driverless car in the Palmetto State.
The bulk of testing of these vehicles takes place out west, particularly in California. However, given South Carolina's lax regulatory environment, it is not entirely out of the question that these vehicles could soon make their way to roads across the state.
More often than not, it is not autonomous vehicles that make mistakes, but rather humans that make mistakes about how autonomous vehicles drive. This is because artificial intelligence driving a car does not think in the same way as a human.
Autonomous vehicles are programmed to follow traffic laws to the letter. This means that they will never drive over the speed limit, nor will they ever roll stop signs, in contrast to human drivers, who frequently drive with the flow of traffic and forgo a complete stop when nothing is coming.
Furthermore, autonomous vehicles tend to act more cautiously even when it is not a question of law. For example, a driverless car might give cross traffic more time to pass before taking a left turn than a human driver would.
This abundance of caution may become dangerous when human drivers put themselves on autopilot, so to speak. Many drivers develop expectations as to how other drivers on the road will behave, and interact with other vehicles based on these expectations, rather than thinking about each car as they see it act.
For example, a driver might assume that the car in front of them will go straight through a light that has just turned yellow rather than slowing to a stop. If the car in front is actually an autonomous vehicle acting with extreme caution, that assumption could prove to be false and could result in the human driver rear-ending the driverless car.
Because a large proportion of automated vehicle accidents occur around intersections and involve stopping or accelerating, these accidents often take place at relatively low speeds and usually do not cause serious injuries. However, this is not to say that you cannot be injured in an accident with a driverless car under the wrong circumstances.
It should be noted that, although humans at least nominally cause most accidents with driverless cars, there are exceptions. In California, a hotbed of autonomous vehicle testing, state regulators require companies to report when a human driver had to step in to redirect a car that was being tested. Between 2014 and 2015, Google averted a number of accidents involving its autonomous vehicles only because of intervention by a human driver, and corrected at least 200 more less-serious failures.
It is possible that once the testing phase is over and more driverless cars hit the road without a human driver as a failsafe to correct their mistakes, these vehicles may start to be the cause of accidents more often.
If you find yourself in an accident with an autonomous vehicle, the lack of a driver should not change much about your next steps. It is still best to alert the authorities, and if there is any chance you might be injured, you should seek medical attention immediately.
As for liability, the lack of guidance in South Carolina law essentially means that whoever is operating the driverless car could be responsible for any mistakes it may have made, although these mistakes would most likely have to be actual malfunctions rather than simply driving too safely.
Just how far this responsibility extends is a legal gray area. It will take an experienced personal injury lawyer to build a convincing case against whoever is operating the car. If you have been injured in any vehicle accident and you believe someone else was at fault, contact the Florence car accident attorneys of Jebaily Law Firm today for information about how we can help.