Design and Implementation of Autonomous Car using Raspberry Pi

Rishi Sidana (101403152), Ritish Bansal (101403153), Robinjot Singh (101403154), Shantanu Aggarwal (101403167)

ABSTRACT
An autonomous car, or unmanned ground vehicle, is a vehicle that is capable of sensing its environment and navigating without human input. The technical reality of autonomous cars is coming sooner than is generally thought. With the number of accidents increasing day by day, it has become important to compensate for human error. Self-driving cars, which only need to be told the destination and then let the passengers continue with their work, could bring this to an end: they not only avoid accidents but also relieve passengers of minor day-to-day driving activities. The autonomous vehicle concept started with the advancement of driver assistance and has now extended to semi-autonomous and fully autonomous vehicles. This project aims to build and implement forward collision avoidance in a car that also follows a particular path in restricted or crowded environments. An HD camera along with an ultrasonic sensor is used to provide the necessary data from the real world to the car.
The car is capable of reaching a given destination safely and intelligently, thus avoiding the risk of human error. Several existing algorithms, such as lane detection and obstacle detection, are combined to provide the necessary control to the car.

1. INTRODUCTION
1.1 Need of an Autonomous Vehicle
In this project, an autonomous Raspberry Pi controlled car was built using supervised learning of a neural network with a single hidden layer. A remote-controlled car is used with a Raspberry Pi and a Raspberry Pi camera module mounted on top. In the training mode, the camera module provides the images needed to train the neural network; in the autonomous mode, it provides the images to the trained model, which predicts the movements and direction of the car. An HD camera along with an ultrasonic sensor provides the necessary data from the real world to the car, such as lane markings and obstacles.
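As an illustration of this training setup, the sketch below builds a single-hidden-layer network with OpenCV's ANN_MLP module. The input size (32x24 grayscale frames), the hidden-layer width (32 units), and the four output classes are assumptions made for the sketch, not values taken from the paper; the random arrays stand in for recorded camera frames and the corresponding key presses.

    import cv2
    import numpy as np

    # Assumed dimensions: 32x24 grayscale frames flattened to 768 inputs,
    # one hidden layer of 32 units, 4 outputs (left, right, forward, backward).
    model = cv2.ml.ANN_MLP_create()
    model.setLayerSizes(np.int32([768, 32, 4]))
    model.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
    model.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
    model.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER | cv2.TERM_CRITERIA_EPS,
                           500, 1e-4))

    # Placeholder training set: in training mode these would be camera frames
    # paired with one-hot labels of the key pressed by the human driver.
    X = np.random.rand(100, 768).astype(np.float32)
    y = np.zeros((100, 4), dtype=np.float32)
    y[np.arange(100), np.random.randint(0, 4, 100)] = 1.0
    model.train(X, cv2.ml.ROW_SAMPLE, y)

    # Autonomous mode: predict a steering command for a new frame.
    _, out = model.predict(X[:1])
    direction = ('left', 'right', 'forward', 'backward')[int(out.argmax())]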
The car reaches the given destination, specified by its initial and final points, safely and intelligently, thus avoiding the risk of human error. Several existing algorithms, such as lane detection and obstacle detection, are combined to provide the necessary control to the car. With accidents increasing day by day, self-driving cars that only need to know the destination can take human error out of driving and also relieve passengers of minor day-to-day driving errands.

2. HARDWARE DESIGN
2.1 List of Hardware
A pre-built four-wheel-drive (4WD) chassis is used as a base, on which the following hardware components are mounted:
- Raspberry Pi (rev B) for GPU and CPU computations
- L-shaped aluminium strip to support the camera
- Motor driver IC L293D to control two motors
- 8 AA batteries
- Jumper wires to connect the individual components
- Wi-Fi 802.11n dongle
- Pi camera
- Ultrasonic sensor to detect obstacles

2.2 Hardware and Software Description
2.2.1 Raspberry Pi
The Raspberry Pi hardware has evolved through several versions that vary in memory capacity and peripheral-device support. The block diagram in Fig 1 depicts Models A, B, A+, and B+. Models A and A+ and the Pi Zero lack the Ethernet and USB hub components; in these models, the USB port is connected directly to the system on a chip (SoC). In the other models, the Ethernet adapter is internally connected to an additional USB port. On the Pi 1 Model B+ and later models, the USB/Ethernet chip contains a five-port USB hub, of which four ports are available, while the Pi 1 Model B provides only two.
On the Pi Zero, the USB port is also connected directly to the SoC, but through a micro USB (OTG) port.
Fig 1: Features offered in the Raspberry Pi models
2.2.2 Wi-Fi 802.11n Dongle
The Wi-Fi dongle used is an 802.11n USB adapter with a maximum output power of 1 W (1000 mW), which is higher than that of most dongles on the market.
It is a single-band (2.4 GHz) 1T1R 802.11n adapter with a maximum transfer speed of 150 Mbit/s.

2.2.3 Motor Driver IC L293D
The L293D is a dual H-bridge motor driver integrated circuit (IC). It acts as a current amplifier: it takes a low-current control signal from the Raspberry Pi's GPIO pins and provides a higher-current signal at its outputs.
This higher-current signal is used to drive the motors. The L293D contains two built-in H-bridge driver circuits. In its common mode of operation, two DC motors can be driven simultaneously, both in the forward and the reverse direction.
2.2.4 Pi Camera
The Pi camera is the camera module shipped for the Raspberry Pi. It can be used to capture high-definition video as well as still photographs.
2.2.5 Ultrasonic Sensor
An ultrasonic sensor is a device that measures the distance to an object using sound waves: it sends out a sound wave at a specific frequency and listens for that sound wave to bounce back. It is important to understand that some objects may not be detected by ultrasonic sensors.
Fig 2: Ultrasonic sensor
2.2.6 Raspbian OS
Of the operating systems available for the Raspberry Pi (Arch, RISC OS, Plan 9, Raspbian), Raspbian comes out on top as the most user-friendly and best-looking, with the best range of default software, and it is optimized for the Raspberry Pi hardware. Raspbian is a free operating system based on Debian (Linux) and is available from the Raspberry Pi website.

2.2.7 Python
Python is a widely used general-purpose, high-level programming language.
Its syntax allows programmers to express concepts in fewer lines of code compared with other languages such as C, C++, or Java.

2.2.8 OpenCV
OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library. OpenCV was built to provide a common infrastructure for computer vision applications and to accelerate the use of machine perception in commercial products. The library has more than 2500 optimized algorithms, including a comprehensive set of both classic and state-of-the-art computer vision and machine learning algorithms. Being a BSD-licensed product, OpenCV makes it easy for businesses to utilize and modify the code.

2.3 Hardware Components Connection
The four wheels of the chassis are connected to four separate motors.
The motor driver IC L293D is capable of driving two motors simultaneously. The rotation of the wheels is synchronized per side, i.e. the left front and left back wheels rotate in sync, and the right front and right back wheels rotate in sync. The IC is fixed on the lower shelf with two 0.5-inch screws.
It is permanently connected to the motor wires, and the necessary jumper wires are drawn from the L293D to the Raspberry Pi. Thus, the pair of motors on each side is given the same digital input from the L293D at any moment. The car moves forward or backward when the wheels on both sides rotate in the same direction at the same speed; it turns when the left-side wheels rotate in the opposite direction to the right-side wheels. The chassis has two shelves over the wheels, separated by approximately 2 inches.
The rest of the space on the lower shelf is taken by 8 AA batteries, which provide the power to run the motors. To control the motor connected to pin 3 (O1) and pin 6 (O2), the control pins used are pin 1, pin 2, and pin 7, which are connected to the GPIOs of the Raspberry Pi via jumper wires.

Table 1: Truth table to control the left motor
Pin 1 | Pin 2 | Pin 7 | Function
High  | High  | Low   | Anti-clockwise
High  | Low   | High  | Clockwise
High  | High  | High  | Stop
High  | Low   | Low   | Stop
Low   | X     | X     | Stop
High = +5 V, Low = 0 V, X = either high or low (don't care)

Fig 3: L293D motor driver IC
Fig 4: Hardware connections

The Raspberry Pi case is glued to the top shelf along with the L-shaped aluminium strip. The Wi-Fi dongle is attached to the USB port of the Raspberry Pi in order to connect to it wirelessly. The complete connection of the Raspberry Pi with the motor controller L293D is shown in Fig 3. Since the Raspberry Pi needs its own IP address, it has to be connected to a Wi-Fi router or hotspot; the network configuration is changed accordingly so that the Raspberry Pi recognizes the router every time it boots up.

Fig 5: Hardware component connections
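As an illustration of this wiring, the following minimal sketch drives the left motor pair according to Table 1. The BCM pin numbers are assumptions made for the sketch, since the actual GPIO assignment is not specified above, and which rotation direction corresponds to "forward" depends on how the motors are mounted.

    import RPi.GPIO as GPIO
    import time

    # Assumed BCM pin numbers wired to the L293D control pins.
    EN = 17   # L293D pin 1 (enable for the left H-bridge)
    IN1 = 27  # L293D pin 2 (input 1)
    IN2 = 22  # L293D pin 7 (input 2)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup([EN, IN1, IN2], GPIO.OUT)

    def left_pair_clockwise():
        # Table 1: pin 1 high, pin 2 low, pin 7 high -> clockwise.
        GPIO.output(EN, GPIO.HIGH)
        GPIO.output(IN1, GPIO.LOW)
        GPIO.output(IN2, GPIO.HIGH)

    def left_pair_anticlockwise():
        # Table 1: pin 1 high, pin 2 high, pin 7 low -> anti-clockwise.
        GPIO.output(EN, GPIO.HIGH)
        GPIO.output(IN1, GPIO.HIGH)
        GPIO.output(IN2, GPIO.LOW)

    def left_pair_stop():
        # Table 1: enable low stops the motor regardless of the inputs.
        GPIO.output(EN, GPIO.LOW)

    left_pair_clockwise()
    time.sleep(1)
    left_pair_stop()
    GPIO.cleanup()

The right motor pair is driven the same way through the second H-bridge of the L293D.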
3. ALGORITHMS
3.1 Lane Detection Algorithm
Traditionally, lanes are detected by one of two approaches: the feature-based technique and the model-based technique. The feature-based technique localizes the lanes in road images by combining low-level features such as painted lines or lane edges. Since it does not impose any global constraints on the lane-edge shapes, this technique may suffer from occlusion or noise. The model-based technique assumes that the shape of a lane can be represented by a straight line or a parabolic curve, so detecting the lanes reduces to calculating the parameters of that model. To estimate the parameters, likelihood functions, the Hough transform, chi-square fitting, and similar methods are applied. This makes the model-based technique much more robust against noise and missing data than the feature-based technique. However, since most lane models focus only on certain road shapes, they lack the flexibility to model roads of arbitrary shape.
In the proposed algorithm, a combination of the feature-based and model-based approaches is used to detect the lanes. In general, the algorithm is valid for all kinds of roads, whether or not they are marked with white lanes. The overall method consists of 7 major steps (a consolidated OpenCV sketch is given after the list):

1. Extract the color range for the road. The first step converts the captured image into a binary image by extracting the color range of the road (directly in front of the car) and of the areas assumed not to be road. The color range is determined only when no color range exists yet, or when the current range is deemed incorrect by step 6 of the algorithm.

2. Define the region of interest. The second step generates a region of interest that is usually no more than half of the image, since much of the upper half (sky and other horizon features) is not needed.
This region of interest (ROI) is built from the bottom up, because the car is located closer to the bottom of the image, and is essentially a trapezium.
Fig 6: Defining the region of interest

3. Convert the complex region of interest into a simple shape. The third step uses approximations to generate contours of the binary image within the region of interest. This simplifies the image greatly and also removes a lot of noise.

4. Determine the shape of the road. The fourth step draws Hough lines on the image generated by step 3, creating well-defined straight lines for possible road edges (some of which are actually just noise).

5. Filter the noise. The fifth step reduces the Hough-line noise by removing lines that are surely not road edges.
For example, if a line on the left side of the image tilts to the left, it is surely not an edge of the left side of the road, since lane edges tilt inwards from the bottom up, like the left and right edges of a trapezium.
Fig 7: Convention for positive intercepts

6. Make the detection robust against noise. The sixth step implements fault tolerance when calculating the road edges. Since roads are usually smooth rather than abrupt, detected lane edges that do not line up with the previous ones are discarded. If this occurs 3 times in succession, the color range from step 1 is recalculated so that this step can succeed again. Such events can be caused by temporary irregularities in the road,
such as patches of dirt that match the road color range and throw off where the edge of the road is believed to be. Up to three frames of this sort of error can be ignored, since within 3 frames the same path should still be valid; after 3 frames, the color range is recalculated.

7. Determine the turns in the road, if any, and give directions to the car. The final step divides the region of interest into 20%, 30%, and 50% regions (from the bottom up). The regions are compared to determine whether the road changes in a way that indicates a turn. If a turn exists, the car's motion is changed so that the car follows the turn ahead.
Fig 8: Division of the region of interest in the ratio 2:3:5
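The sketch below illustrates steps 1 to 4 of this pipeline with OpenCV. The HSV color thresholds, the trapezium geometry, and the Hough parameters are placeholder assumptions, not values from the paper; in the actual algorithm the color range is (re)estimated from the road region as described in steps 1 and 6.

    import cv2
    import numpy as np

    def candidate_road_edges(frame, road_lo=(0, 0, 90), road_hi=(180, 60, 220)):
        h, w = frame.shape[:2]

        # Step 1: binary image from an assumed road color range (placeholder).
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        binary = cv2.inRange(hsv, np.array(road_lo), np.array(road_hi))

        # Step 2: trapezium-shaped region of interest over the lower half.
        mask = np.zeros_like(binary)
        trapezium = np.array([[(0, h), (int(0.4 * w), h // 2),
                               (int(0.6 * w), h // 2), (w, h)]], dtype=np.int32)
        cv2.fillPoly(mask, trapezium, 255)
        roi = cv2.bitwise_and(binary, mask)

        # Step 3: approximate contours to simplify the shape and reduce noise
        # (findContours returns two values in OpenCV 4, assumed here).
        contours, _ = cv2.findContours(roi, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        simplified = np.zeros_like(roi)
        for c in contours:
            approx = cv2.approxPolyDP(c, 5, True)
            cv2.drawContours(simplified, [approx], -1, 255, 2)

        # Step 4: Hough lines as candidate road edges; the noise filtering of
        # steps 5 and 6 is omitted here.
        return cv2.HoughLinesP(simplified, 1, np.pi / 180, threshold=40,
                               minLineLength=40, maxLineGap=20)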
3.2 Obstacle Avoidance
In robotics, obstacle avoidance is the task of satisfying the control objective subject to non-intersection or non-collision position constraints. Normally, obstacle avoidance involves the pre-computation of an obstacle-free path along which the controller then guides the robot. Although inverse perspective mapping can find the distance of objects far away from the car using known camera parameters and a generated model, it takes more computation. An ultrasonic sensor is the better option in this case, as it does not require heavy CPU computation and both detects obstacles and helps find their distance.
Ultrasonic sensors are used to detect the distance of nearby flat objects so as to avoid obstacles. The sensor is a very low-power device and is used extensively in small autonomous robots and cars. It works by transmitting a high-frequency (ultrasonic) sound pulse toward the object; after reflection, the pulse is picked up by the receiver of the sensor.
From the time taken to receive the reflected signal, the distance of the nearby vehicle or other obstacle is calculated. One demerit of this approach is that if the reflecting surface is at an angle to the sensor, the distance measurement may be ambiguous and has to be supported by other techniques, such as OpenCV image processing, before any decision about a turn is made.
Fig 9: Concept of the ultrasonic sensor
The ultrasonic sensor is mounted on a servo motor at the front of the chassis. The sensor rotates periodically and checks for potentially threatening obstacles that may not be in the line of motion but could hit the car if no precaution is taken.

Algorithm (the following steps are repeated after a fixed interval of time, i.e. every 300 ms; a minimal distance-measurement sketch is given after the list):
1. Scan the surroundings and calculate the distance of the obstacles from the car.
2. The minimum threshold distance that is safe for the car is 1 meter.
3. If the calculated distance is less than the threshold, stop the car and check the other sides.
4. Rotate the car and move ahead.
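A minimal sketch of the distance measurement, assuming an HC-SR04-style trigger/echo sensor; the exact sensor model and the GPIO pins are not specified above and are assumptions here.

    import time
    import RPi.GPIO as GPIO

    TRIG = 23  # assumed BCM pin driving the sensor's trigger line
    ECHO = 24  # assumed BCM pin reading the sensor's echo line

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(TRIG, GPIO.OUT)
    GPIO.setup(ECHO, GPIO.IN)

    def distance_cm():
        # A 10 microsecond pulse triggers one ultrasonic burst.
        GPIO.output(TRIG, GPIO.HIGH)
        time.sleep(0.00001)
        GPIO.output(TRIG, GPIO.LOW)

        # The echo pin stays high while the burst is in flight.
        pulse_start = pulse_end = time.time()
        while GPIO.input(ECHO) == 0:
            pulse_start = time.time()
        while GPIO.input(ECHO) == 1:
            pulse_end = time.time()

        # Sound travels about 34300 cm/s; halve for the round trip.
        return (pulse_end - pulse_start) * 34300 / 2

    while True:
        if distance_cm() < 100:  # the 1 m threshold from the algorithm above
            print("Obstacle within 1 m: stop and check the other sides")
        time.sleep(0.3)  # the 300 ms scan interval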
4. DEVELOPMENT AND IMPLEMENTATION PHASES
Development of the car was divided into four separate phases, each slowly implementing a portion of the functionality necessary for autonomy.

4.1 Development and Implementation Phase 1
The first phase was to implement a way to control the car manually from a remote machine. To do this, the authors connect directly to the Raspberry Pi from a PC and manually give the inputs left, right, forward, and backward using the A, D, W, and S keys respectively (a minimal sketch of this keyboard mapping is given after Phase 3).

4.2 Development and Implementation Phase 2
The second phase was to use the ultrasonic sensor to detect flat objects, calculate the distance of each object from the car, stop the car when an object is detected, and steer the car around an obstacle found within 1 meter of the car.

4.3 Development and Implementation Phase 3
The third phase was to implement the lane detection algorithm using the Pi camera: video is captured and the lane is detected from it using the 7 steps of the lane detection algorithm described above.
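A minimal sketch of the Phase 1 keyboard teleoperation. Reading raw keypresses from the terminal (e.g. over SSH) is one possible implementation; the motor helper functions are placeholders standing in for the L293D control described in Section 2.3.

    import sys
    import termios
    import tty

    def read_key():
        # Read a single raw keypress from the terminal.
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        try:
            tty.setraw(fd)
            return sys.stdin.read(1)
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)

    # Placeholder motor commands; on the car these would set the L293D
    # inputs according to Table 1 (see the sketch in Section 2.3).
    def forward():  print("forward")
    def backward(): print("backward")
    def left():     print("left")
    def right():    print("right")
    def stop():     print("stop")

    while True:
        key = read_key().lower()
        if key == 'w':   forward()
        elif key == 's': backward()
        elif key == 'a': left()
        elif key == 'd': right()
        elif key == 'q':  # 'q' to quit is an addition for this sketch
            stop()
            break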
4.4 Development and Implementation Phase 4
The fourth phase was to put everything developed together and take the car for a test drive.

5. RESULTS
The car was able to perform basic autonomous actions, including avoiding obstacles, recognizing an unknown road, and driving down that road on its own.

6. CONCLUSION
The attempts to create an autonomous car were successful. The authors hope that more research can be done to simplify the implementation of autonomous driving on actual human-drivable automobiles, closer to what Google and Tesla have done and are doing currently.

REFERENCES
[1] "Self-Driving Cars: The New Paradigm", Morgan Stanley Blue Paper, Nov. 2013. https://orfe.princeton.edu/~alaink/SmartDrivingCars/PDFs/Nov2013MORGANSTANLEY-BLUE-PAPER-AUTONOMOUS-CARS%EF%BC%9A-SELF-DRIVING-THE-NEW-AUTO-INDUSTRYPARADIGM.pdf
[2] "Self-Driving Cars", University of Washington. https://faculty.washington.edu/jbs/itrans/self_driving_cars%5B1%5D.pdf
[3] "Autonomous Car", Wikipedia. en.wikipedia.org/wiki/Autonomous_car
[4] "The Self-Driving Car", IEEE Spectrum. https://spectrum.ieee.org/static/the-self-driving-car
[5] Narathip Thongpan and Mahasak Ketcham, "The State of the Art in Development of a Lane Detection for Embedded Systems Design", Conference on Advanced Computational Technologies & Creative Media (ICACTCM'2014), Aug. 14-15, 2014.
[6] Hod Lipson and Melba Kurman, Driverless: Intelligent Cars and the Road Ahead.
[7] Chris Urmson, "How a Driverless Car Sees the Road". www.youtube.com/watch?v=tiwVMrTLUW
[8] Autonomous car by the University of California, Berkeley, CSED Department. https://www.theverge.com/autonomous-cars
[9] Hamuchiwa, "AutoRCCar: OpenCV Python RC car". https://github.com/hamuchiwa/AutoRCCar/tree/master/Traffic_signal
[10] "Misconceptions Regarding Autonomous Cars". http://www.driverless-future.com/?page_id=774
[11] Ryan Zotti, "Self-Driving-Car", GitHub. https://github.com/RyanZotti/Self-Driving-Car
[12] "Design and Implementation of Autonomous Car using Raspberry Pi". http://research.ijcaonline.org/volume113/number9/pxc3901789.pdf
[13] Vivek Wadhwa and Alex Salkever, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future.