
   Technology Stocks: Driverless autos, trucks, taxis etc.


To: Sam who wrote (104) | 7/15/2017 10:17:49 AM
From: Sam
   of 149
 
Tesla's Big Moves in Self-Driving Cars
In order to justify its stock price, the automaker must become a leading autonomous mobility platform.
Billy Duberstein
(Dubs82)
Jul 14, 2017 at 5:15PM

While Alphabet's (NASDAQ:GOOG) (NASDAQ:GOOGL) Waymo is currently the leader in driverless-vehicle miles driven, Tesla (NASDAQ:TSLA) isn't far behind. In fact, one could argue that with Tesla recently having traded at over $60 billion -- higher than any other U.S. automaker -- it must become a leader in autonomous driving, or else. Morgan Stanley analyst Adam Jonas believes Tesla's "platform" potential is essential to the bull case in which the company reaches an Apple-like valuation:

Could a highly successful Model 3 and its progeny achieve this? In our view, no. Could an electric, autonomous semi truck achieve this? We don't think so. Solar roofs... or Tesla Powerwalls? Not big enough. In our view, there's only one market big enough to propel the stock's value to the levels of [CEO] Elon Musk's aspirations: that of miles, data and content. When does Tesla make the leap to mobility?

Therefore, in order to even think about investing in Tesla at these levels, you need to consider where it stands in the race to a fully autonomous future. Here's what you need to know.



Image source: Tesla.

Tesla's strategy

Unlike Waymo, which is aiming straight for producing Level 4-5 autonomous cars without consideration for Level 3 features used in current vehicles, Tesla has had Level 2-3 features in its cars since 2014. Autonomous features are present in all Tesla models -- the Model S, Model X, and even the upcoming mass-market Model 3, slated to begin production this month. These capabilities include auto-steering, traffic-aware cruise control (cruise control that adjusts speed to surrounding traffic), automated parking and "Summon" functionality, and driver warning systems.

But while Tesla includes all of this functionality in today's cars, it's also moving extremely quickly toward Level 4-5 fully autonomous functionality. In fact, all Teslas made with second-generation Autopilot hardware, which came out in October 2016, are capable of full Level 4 autonomy, according to management.

That means that once the company develops and is permitted to upload the necessary software to users' vehicles (two big "if"s), these newer Teslas could theoretically become fully autonomous. In fact, in a recent TED talk, Musk indicated that Level 4-5 autonomous vehicles are only two to three years away, not a decade or more, as others have hypothesized.

continues at fool.com



From: Sam | 7/15/2017 10:55:16 AM
   of 149
 
This piece is pretty long, but it covers a lot of ground: some history of autonomous vehicles ("the Woodstock of robotics"), the dystopian vision of what they may entail and the problems people will encounter, and the potential good stuff as well. Since we're only human, it will undoubtedly be both good and bad.


How the Bay Area took over the self-driving car business
By David R. Baker and Carolyn Said
July 14, 2017 Updated: July 14, 2017 6:00am
sfchronicle.com

excerpts:

“I can tell you that as late as 2014, I would meet with executives at car companies, and they would just smile at me, because it was known that this was crazy, and had nothing to do with reality,” said Thrun, Google’s former self-driving guru.

Today, virtually all of the world’s major automakers have jumped in. Ford, for example, has opened a bustling Palo Alto lab a mile from Tesla headquarters and has promised to have robotic taxis by 2021. And it will invest $1 billion over the next five years in Argo AI, an autonomous vehicle startup in Pittsburgh.

So what changed their minds?

In 2008, Thrun called a young Canadian roboticist named Chris Urmson to offer him a job at Google. The two had known each other for years, as colleagues and competitors.

They overlapped at Carnegie Mellon starting in the late 1990s. Long an engineering powerhouse, the school by then had become one of the world’s premier hubs for robotics research. Thrun, a native of Solingen, Germany, co-directed the university’s Robot Learning Laboratory. Urmson, a computer scientist, sought out its graduate program after seeing a poster of a Carnegie Mellon robot crawling out of a volcano.

But only after Thrun left for Stanford University in 2003 did the two get to know each other well, thanks to an event Urmson calls “the Woodstock of robotics.”

And some implications:

“There’s near-universal consensus that at least 4 million transportation workers will be displaced by this technology, and yet no one is talking about what is the plan for those workers,” said Doug Bloch, political director of the Teamsters Joint Council 7. “It’s going to be a crisis.”

Other potential drawbacks range from annoying to dystopian.

City streets full of constantly circling autonomous taxis could become perpetually crowded, even if traffic flows smoothly. Accidents might be less frequent but more deadly, as cars drive faster and closer together. Hackers could gain control of robot cars and trucks, turning them into weapons or using them to disrupt traffic. Terrorists could load a bomb into a self-driving car and send it to a target, like a low-speed cruise missile on wheels. Control centers tracking cars could invade the privacy of passengers.

These nightmare scenarios are far from certain. And yet, even analysts who think autonomous vehicles will do more good than harm say their arrival is likely to alter society in profound ways.

Car ownership will almost certainly decline, as it becomes cheaper and easier to simply hail a robot taxi. Within a couple of decades, driving one’s own car may become a hobby, like riding horses.

“People will be able to subscribe to transportation services just like they do to entertainment with Netflix or Spotify,” Lyft President John Zimmer said last year.



From: Sam | 7/15/2017 11:00:36 AM
   of 149
 
Self-driving car arrives in Seattle after 2,500-mile autonomous cross-country trip
by Taylor Soper
on July 14, 2017 at 9:23 am

geekwire.com

Washington Gov. Jay Inslee is already seeing results of his new executive order that encourages self-driving technology testing.

Torc Robotics, a Virginia Tech spinout developing autonomous vehicle technology, just completed a six-day 2,500-mile cross-country trip with its autonomous car that arrived in Seattle on Thursday.

It’s the first certified autonomous vehicle pilot test in Washington since Inslee signed the order last month. Torc was the first company that registered with the state’s new Autonomous Vehicle Pilot Program permit to test its self-driving car in Washington.

“I want to congratulate Torc Robotics on their cross-country autonomous vehicle trip and welcome them to our state,” Inslee said in a statement. “Washington is already a leader in autonomous vehicle technology, and these early tests demonstrate how AVs could help save lives, improve mobility and be an important tool in our efforts to combat climate change.”


The Torc team with its self-driving car in Seattle. Via Gov. Inslee’s office.

Torc Robotics has spent the past decade working on self-driving technology, but only recently began testing a consumer-grade vehicle. Michael Fleming, Torc co-founder and CEO, told GeekWire that his company picked Seattle as a destination because of the executive order.

“We are really excited about Washington’s stance on being a self-driving friendly state,” he said.

continues at the link



From: Sam | 7/19/2017 11:26:00 AM
   of 149
 
Interview with Andrew Ng on The1A.org

Getting Really Smart About Artificial Intelligence
Wednesday, Jul 19 2017 • 11 a.m. (ET)

Should We Be Worried About AI? | What Worries You About AI? | The Artificial Intelligence Behind AlphaGo

the1a.org

Chances are, you’ve already encountered artificial intelligence today.

Did your email spam filter keep junk out of your inbox? Did you find this site through Google? Did you encounter a targeted ad on your way?

We constantly hear that we’re on the verge of an AI revolution, but the technology is already everywhere. And Coursera co-founder Andrew Ng predicts that smart technology will help humans do even more. It will drive our cars, read our X-rays and affect pretty much every job and industry. And this will happen soon.

As AI rises, concerns grow about the future of humans. So how can we make sure our economy and our society are ready for a technology that could soon dominate our lives?

Guests
Andrew Ng: computer scientist and expert on artificial intelligence; co-founder and co-chairman of Coursera, a massive open online course platform; founding head of Google Brain, an artificial intelligence initiative; former vice president and chief scientist at Baidu, a Chinese digital services company

Should We Be Worried About AI?

There’s clearly some public anxiety about artificial intelligence.



And why wouldn’t there be? One of the smartest humans alive, Stephen Hawking, says AI could end mankind.

But the question isn’t whether to worry about AI, it’s what kind of AI to worry about.

Tesla founder Elon Musk recently warned a gathering of governors that they need to act now to put regulations on the development of artificial intelligence. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal,” he said.

This kind of concern should come with a caveat, which The Verge points out:

Musk is not talking about the sort of artificial intelligence that companies like Google, Uber, and Microsoft currently use, but what is known as artificial general intelligence — some conscious, super-intelligent entity, like the sort you see in sci-fi movies. Musk (and many AI researchers) believe that work on the former will eventually lead to the latter, but there are plenty of people in the science community who doubt this will ever happen, especially in any of our lifetimes.

To understand the threats AI may or may not pose to society, it’s best to understand the types of AI that do and don’t (yet) exist. Wait But Why has a great summary:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we’re yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board.

(Really, the full post is essential reading.)

Type 1 exists. This is what we use every day. This is what is reshaping our social networks, advertising and economy. The threat here is already visible. “Fake news” designed to hoax humans games the algorithms to reach a wider audience. Automation is replacing human jobs.

Types 2 and 3 cause the anxiety. Futurist Michael Vassar, who has worked with AI, has used Nick Bostrom’s thinking on artificial intelligence to predict that “if greater-than-human artificial general intelligence is invented without due caution, it is all but certain that the human species will be extinct in very short order.”

Even though very smart people disagree over whether this AI will ever exist, the concept of a science-fiction dystopia is simultaneously terrifying and alluring. It’s easy to imagine a Terminator-like world where machines do battle with their human creators and think of it as both unlikely to happen in our lifetimes and also inevitable. And this can make it hard to think about taking steps to stop it from happening. At least one study has found that people are worried about smart machines killing them.

However, as the future of AI approaches, there are already noticeable problems in our ever-more automated present. The economy is reacting to the loss of jobs to machines. The algorithms that drive what information we see can be gamed to feed us misinformation, and even if they work as intended, they can lock us into only getting one side of every story.

“In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve,” wrote scientist Arend Hintze. “My research will not change that, though my political self – together with the rest of humanity – may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the one percent and the rest of us.”

more at the link plus the interview will be archived there



To: Sam who wrote (101) | 7/22/2017 12:16:08 PM
From: Kirk ©
   of 149
 
Just hope your car doesn't decide you are a scumbag and deserve to be dead...
SkyNet is coming. For better or worse.
I still haven't heard much discussion in the press or forums about how a self-driving car would decide between two deadly options: run over "something" in the road (perhaps a child who ran out between two parked cars after a ball), or drive off a cliff or into a tree.
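For what it's worth, engineers usually frame that decision as trajectory scoring: the planner enumerates feasible maneuvers and picks the one with the lowest expected cost. Here is a minimal sketch of the idea; the maneuver names, probabilities, and severity weights are all invented for illustration, not taken from any real system:

```python
# Toy trajectory scorer: pick the maneuver with the lowest expected harm.
# All numbers here are illustrative placeholders.
def expected_cost(maneuver):
    # Expected cost = sum over outcomes of P(outcome) * severity(outcome)
    return sum(p * severity for p, severity in maneuver["outcomes"])

def choose(maneuvers):
    return min(maneuvers, key=expected_cost)

maneuvers = [
    {"name": "brake_hard",   "outcomes": [(0.3, 100), (0.7, 0)]},  # may still hit pedestrian
    {"name": "swerve_left",  "outcomes": [(0.9, 80)]},             # very likely hits the tree
    {"name": "swerve_right", "outcomes": [(0.5, 60), (0.5, 20)]},  # risk from oncoming lane
]

best = choose(maneuvers)
```

The hard part, of course, is not the minimization but assigning the severity numbers, which is an ethical question rather than an engineering one.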



From: Sam | 7/25/2017 4:29:23 AM
   of 149
 
Lyft exec: Even with self-driving cars, 'we will always need drivers'
  • "We will always need drivers," Raj Kapoor, chief strategy officer at Lyft, told CNBC.
  • When autonomous vehicle saturation peaks, U.S. drivers could see job losses at a rate of 25,000 a month, or 300,000 a year, according to a report from Goldman Sachs Economics Research.
Anita Balakrishnan | @MsABalakrishnan

Ride-sharing company Lyft has announced plans to move deeper into self-driving technology — but that does not put the company at odds with its fleet of drivers, according to one top executive.

"We will always need drivers," Raj Kapoor, chief strategy officer at Lyft, told CNBC's "Squawk Box" on Monday. He also said that Lyft's big driver network is an advantage over other companies, such as Tesla, that are also working on self-driving cars.

"What the consumer cares about the most is having a reliable experience, in addition to safety and that means that when I open up the app, I know that there's a car there in a couple minutes," Kapoor said. "For that kind of ubiquity, you need to have human drivers. We have over 700,000 drivers now. That's something that any other company would have to replicate instantly."

more text and video interview at cnbc.com



From: Sam | 8/3/2017 6:39:02 AM
   of 149
 
This is basically saying that combining AI and mobility is potentially useful for a lot of different things.

Apple’s Tim Cook teases autonomous tech for non-car gadgets
Brittany A. Roston - Aug 1, 2017

slashgear.com

Apple is, by all accounts, developing autonomous technology for cars, but what about other gadgets? Many items could potentially benefit from such technology, not the least of which are home appliances that need to navigate around the house without aid: a smart vacuum, perhaps, or maybe even a personal robot if we’re to look far enough into the future. Could that autonomous system one day come from Apple itself? Perhaps.

The speculation arises from a simple statement made by Tim Cook during the company’s call with investors earlier today. What did he say? That autonomous systems ‘can be used in a variety of ways. A vehicle is only one, but there are many different areas of it.’ In a perfect world he would have elaborated on what he meant, but he didn’t. Instead he said, ‘And I don’t want to go any further with that.’

Such was enough to set the Internet abuzz with chatter, and not without good reason. Sources have been saying for years that Apple is working on some type of self-driving car, with confirmation coming this summer that reveals the company is focusing specifically on an autonomous car platform.

The autonomous system was confirmed by Apple CEO Tim Cook himself this past June, though he remained quiet about the company’s plans for such technology. During a recent interview with Bloomberg, when asked whether Apple was going to sell its tech to other companies or make its own vehicle, Cook said, ‘We’ll see where it takes us. [Apple isn’t] really saying from a product point of view what we will do.’

Though the focus has largely been on self-driving cars, Cook's latest comments have stirred up speculation that Apple's autonomous technology efforts may expand beyond vehicles. While it is fun to imagine that Apple has plans to bring autonomous tech to consumers, it is just as possible that the company could be considering applications at the commercial level: things ranging from self-piloting drones to warehouse robots or smart factory machinery. It's anyone's guess at this point.




From: Sam | 8/3/2017 6:47:56 AM
   of 149
 
Autonomous car levels: the steps to your car driving itself explained
Dean Gibson | 17 Feb 2017, 11:00am

If car makers are to be believed, autonomous cars are the next big technology - here we explain the levels of autonomy on offer.

Car makers are forging ahead with increased autonomy for their new cars. Firms are embracing the idea of handing control of a vehicle over to the electronics on board, and believe that one day cars will be able to drive themselves to a pre-set destination without any driver input.

While manufacturers are rushing to develop this technology, many car buyers are yet to be convinced. However, it's easy to forget that all cars come with some sort of autonomy as standard. The electric starter motor eliminated the need for a hand crank on the front of cars, while engine electronics control the fuel/air mixture going into a combustion chamber, rather than a choke.

More significant is handy tech such as auto lights and wipers, while cruise control comes as standard on a number of new cars these days. The next level of cruise control is adaptive cruise, and this is one of the first steps towards true autonomous driving. Other tech such as lane keeping, traffic jam assist and road sign recognition are all part and parcel of autonomous car technology, while car makers believe that a fully autonomous car will be on sale within the next decade.

Of course, before this can happen, the technology needs to develop to a stage where it can be left to its own devices, without any input from occupants to control the vehicle. To help realise the autonomous car dream, a scale has been developed explaining the different levels of vehicle autonomy.

The US is at the forefront of much of this technology, with firms such as Google, Ford and Tesla pioneering the tech, while many other companies are developing their autonomous cars in the States. As a result, the American vehicle safety regulator, the National Highway Traffic Safety Administration (NHTSA), has set out guidelines for autonomous driving, splitting the different stages of autonomy into five levels.

Car makers are now using these levels to describe the amount of autonomy offered by their assorted concepts, so here we explain what these five levels of autonomy mean...

Autonomous car levels explained
Level 0: No autonomy
Examples: Conventional cars



It may seem obvious, but Level 0 autonomy covers all conventional cars, where the steering, throttle and brakes are all controlled by the driver. This also includes cars that feature electronic assistance devices such as forward collision warning, lane departure warning, parking sensors and blind spot monitoring, as these systems are passive and still require input from the driver to make a change in the car's direction of travel.

Level 0 also covers useful kit such as auto lights and wipers, and even indicators, as they all require input from the driver to turn them on and off. Even though this is an obvious category to classify, it's the base level from which all autonomous technology in the following categories is measured.

Level 1: Function-specific autonomy
Examples: Adaptive cruise control, ESP



Level 1 autonomy covers most of the electronic driver aids that are available on new cars today. This includes adaptive cruise control, emergency brake assist and even electronic stability control. The classification of Level 1 autonomy is electronic assistance that still requires the driver to have main control of the vehicle, the key point being that control of either the steering or the throttle is still the job of the driver at all times.

In addition, the electronic assistance systems work independently of each other. This means that a car fitted with adaptive cruise control and active lane keeping has one set of sensors that adjusts the speed of the vehicle according to what's ahead, while a secondary set of sensors and electronics detects whether the vehicle is leaving its lane and steers it back. In essence, if the cruise control detects slow-moving traffic ahead, the lane-keeping system cannot respond by making the car change lanes, because the different sets of sensors aren't working together.

Self-parking is another form of Level 1 autonomy, because the driver operates the throttle while the car steers - it still requires the driver to stop and start to prevent a collision with the vehicle's surroundings.
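The defining property of Level 1 can be condensed into a short sketch: each aid owns exactly one control axis, and neither can invoke the other. The function names, thresholds, and gains below are invented for illustration:

```python
# Level 1: each assistance system acts alone on a single control axis.
def adaptive_cruise(gap_m, set_speed, current_speed):
    """Adjusts speed only; knows nothing about steering."""
    if gap_m < 30:                        # too close to traffic ahead: slow down
        return current_speed - 5
    return min(set_speed, current_speed + 2)

def lane_keep(lateral_offset_m):
    """Adjusts steering only; knows nothing about speed or traffic ahead."""
    return -0.5 * lateral_offset_m        # steer back toward the lane centre

# The two run side by side; neither can trigger the other, which is why
# slow traffic ahead can never cause an automatic lane change at Level 1.
throttle_speed = adaptive_cruise(gap_m=20, set_speed=70, current_speed=65)
steer_cmd = lane_keep(lateral_offset_m=0.4)
```

At Level 2 and above, by contrast, a single controller would fuse both sensor sets and coordinate throttle and steering together.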

Level 2: Combined function autonomy
Examples: Tesla Autopilot, Volvo Pilot Assist

Manufacturers have got as far as offering Level 2 autonomy on production cars. The NHTSA classifies Level 2 autonomy as technology that can take control of the vehicle in specific situations, but the driver is still expected to pay attention to the road and be able to take over control with little or no notice. The combined function of at least two pieces of technology (throttle and steering, for example) is responsible for keeping the vehicle moving, and the systems work together to maintain control.

The leading pioneer of this technology is Tesla, with its Autopilot system. While the Autopilot name suggests the system can take complete control of the car, the reality is that the driver still needs to pay attention to the road, and Tesla ensures this happens by giving multiple warnings and confirmation buttons to press before autonomous control is handed over to the car.

BMW's latest Remote Control Parking is a form of Level 2 combined function autonomy. This technology debuted on the latest 7 Series, and allows the driver to automatically park the car from outside using the remote control keyfob. It allows the driver to start the car, put it in gear and move it into or out of a parking space from outside the car. The parking sensors prevent the car from colliding with obstacles, and the only control the driver has from outside the car is of the throttle.

While these systems are very advanced, they do have their shortcomings. They rely on a suite of advanced sensors that 'read' the road ahead, including road markings, speed limits and other vehicles, but they are only as good as the things that they can 'see'. Human eyes are better at defining lane markings if they have deteriorated, while most of these systems become ineffective in bad weather, such as snow or heavy rain, as their sensors become covered.

Level 3: Limited self-driving autonomy
Examples: Audi A7 piloted driving concept



The next step that many car makers are embracing in prototype form is Level 3 autonomy. This hands most driving duties over to the car's electronics, but the driver will still need to take over in certain situations. However, this level of self-driving should give the driver enough advance warning that they need to take over that no critical situations should arise.

At Level 3, the electronics control the throttle, brakes and steering, and can perform lane changes and even negotiate stop-start traffic. Again, this type of autonomy currently works best on motorways due to the lack of pedestrians, parked cars and other smaller hazards, and manufacturers are concentrating on refining this technology before taking the next step of it being able to work on surface roads.

An early example of this is Audi's piloted driving concept, which we've tried in an A7 Sportback. This allows the driver to take their hands off the steering wheel for prolonged periods, and the car manages speed and steering for the whole time. Like Level 2 autonomy, the system has shortcomings in terms of bad weather, but the system will offer plenty of warning if a driving situation is beyond the abilities of the electronics.

Level 4: Fully self-driving autonomy
Examples: Google car, Volvo Drive Me



The top level of autonomous driving is Level 4. The description the NHTSA has for fully self-driving autonomy is far briefer than it is for any other level: the car's electronics perform all driving functions and monitor the road for the entire trip. Essentially all the occupants have to do is program a destination, and the autonomous vehicle will do the rest.

While a 'driver' is described in the NHTSA's brief, in some states in the US this is considered to be the person who has activated the autonomous technology, even if they're not physically in the car. So far no manufacturer has put an unmanned autonomous vehicle on the road, and all autonomous cars have the full set of driving controls for a person to take over driving at any time. In time, manufacturers believe that a fully autonomous vehicle will do away with a steering wheel and pedals completely, but that won't happen for a few years yet.

While moving through the levels is simple enough, the step to Level 4 autonomy is possibly the biggest of all, as the software will need to be capable of reading and interpreting its surroundings as quickly as the human brain, and it will need to be able to tell the difference between a variety of obstacles and situations.

Governments will need to ensure clear markings help autonomous cars negotiate all kinds of road, while the electronics will need to be able to operate in all weather conditions. Autonomous cars also need to be able to share road space with manually operated cars, which are likely to contribute a random factor that the electronics will need to be able to cope with. As a result, while manufacturers are singing the praises of fully autonomous cars, they're still a few years away from hitting showrooms.
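The ladder described above can be condensed into a small sketch. The level names follow the article, but the attribute and function names are my own, and this reflects the old five-level NHTSA scheme rather than the newer SAE six-level one:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    NO_AUTONOMY = 0         # driver does everything; aids only warn
    FUNCTION_SPECIFIC = 1   # one aid per axis: ACC or lane keep, not both together
    COMBINED_FUNCTION = 2   # throttle and steering cooperate; driver must watch
    LIMITED_SELF_DRIVE = 3  # car drives; driver takes over with advance warning
    FULL_SELF_DRIVE = 4     # car handles the whole trip on its own

def driver_must_monitor(level):
    """Constant driver attention is only dispensable from Level 3 upward."""
    return level <= AutonomyLevel.COMBINED_FUNCTION

# Where the article's examples sit on the scale:
examples = {
    "Tesla Autopilot": AutonomyLevel.COMBINED_FUNCTION,
    "Audi A7 piloted driving concept": AutonomyLevel.LIMITED_SELF_DRIVE,
    "Google car": AutonomyLevel.FULL_SELF_DRIVE,
}
```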

autoexpress.co.uk



From: Sam | 8/4/2017 11:09:00 PM
   of 149
 

In a self-driving car, a four-hour trip could generate more than 14 terabytes of data. But where does all of this data reside? Not all of it will be preserved, and what needs to be stored long-term will eventually be uploaded to cloud storage. Nonetheless, the data will first have to reside in local memory.
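The sustained write rate implied by that figure is easy to check with back-of-envelope arithmetic (assuming decimal terabytes):

```python
# Back-of-envelope: sustained write rate for 14 TB generated over a 4-hour trip.
data_bytes = 14e12              # 14 TB, decimal (10^12 bytes per TB)
trip_seconds = 4 * 3600
rate_mb_per_s = data_bytes / trip_seconds / 1e6
# Roughly 970 MB/s sustained, which is why on-board NAND has to absorb
# the stream: no practical cellular uplink comes close to that bandwidth.
```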


NAND Flashes Its Storage Superiority in Automotive Designs

Systems like ADAS and infotainment are profoundly changing the automotive industry. However, they demand massive amounts of data processing and storage, which is where NAND steps in.

Maurizio Di Paolo Emilio | Jul 20, 2017


Rapid-fire advances in high-reliability infotainment systems, advanced driver-assistance systems (ADAS), and autonomous cars are drastically altering the requirements of on-board storage. Many of these applications generate a high volume of data, which has engineers focusing more intently on coming up with efficient storage strategies.

continues at electronicdesign.com



From: Sam | 8/6/2017 7:38:08 AM
1 Recommendation   of 149
 
Low-Quality Lidar Will Keep Self-Driving Cars in the Slow Lane
For now, cheap laser sensors may not offer the standard of data required for driving at highway speeds.
by Jamie Condliffe July 27, 2017

The race to build mass-market autonomous cars is creating big demand for laser sensors that help vehicles map their surroundings. But cheaper versions of the hardware currently used in experimental self-driving vehicles may not deliver the quality of data required for driving at highway speeds.

Most driverless cars make use of lidar sensors, which bounce laser beams off nearby objects to create 3-D maps of their surroundings. Lidar can provide better-quality data than radar and is superior to optical cameras because it is unaffected by variations in ambient light. You’ve probably seen the best-known example of a lidar sensor, produced by market leader Velodyne. It looks like a spinning coffee can perched atop cars developed by the likes of Waymo and Uber.

But not all lidar sensors are created equal. Velodyne, for example, has a range of offerings. Its high-end model is an $80,000 behemoth called the HDL-64E -- this is the one that looks a lot like a coffee can. It emits 64 laser beams, one atop the other. Each beam is separated by an angle of 0.4° (smaller angles between beams equal higher resolution), with a range of 120 meters. At the other end of the range, the firm sells the smaller Puck for $8,000. This sensor uses 16 beams of light, each separated by 2.0°, and has a range of 100 meters.

To see what those numbers mean, look at the videos below, which show raw data from the HDL-64E at the top and the Puck at the bottom. The expensive sensor's 64 horizontal lines render the scene in detail, while its cheaper sibling's sparse image makes objects harder to spot until they're much closer to the car. So while both sensors nominally have a similar range, the Puck's lower resolution makes it far less useful for detecting distant obstacles.
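A rough geometric sketch shows why the beam separation matters more than the nominal range: the vertical gap between adjacent beams grows linearly with distance, so a sparse sensor puts very few scan lines on a distant pedestrian. The helper names are mine, and this ignores beam aiming, mounting height, and horizontal resolution:

```python
import math

def beam_gap_m(range_m, separation_deg):
    """Vertical distance between adjacent lidar beams at a given range
    (small-angle approximation: gap = range * angle in radians)."""
    return range_m * math.radians(separation_deg)

def beams_on_target(target_height_m, range_m, separation_deg):
    """How many scan lines land on a target of the given height."""
    return int(target_height_m / beam_gap_m(range_m, separation_deg)) + 1

# A 1.7 m pedestrian at 50 m:
hdl64 = beams_on_target(1.7, 50, 0.4)  # high-end sensor, 0.4 deg separation
puck = beams_on_target(1.7, 50, 2.0)   # budget sensor, 2.0 deg separation
```

Under these assumptions the HDL-64E lays several scan lines on the pedestrian at 50 m, while the Puck manages only a single return, which is why the cheaper sensor struggles at the distances highway speeds require.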

continues at technologyreview.com


Copyright © 1995-2017 Knight Sac Media. All rights reserved.