Image processing and computer vision have become well-established trends in artificial intelligence because they use the data gathered in digital images and videos to deduce information using deep learning algorithms. One of the main use cases of deep learning applied to digital images and videos is object detection, which has been applied massively in the development of self-driving cars and intelligent video analytics.

For instance, the cities of the future, also known as smart cities, are expected to deploy camera networks to provide surveillance that prevents crime, manages traffic efficiently, and reduces energy usage. Nevertheless, deploying such a network comes with challenges that must be tackled to avoid service shortages. A surveillance circuit spread across a city requires high data bandwidth to send the images taken by high-definition cameras to a server where they can be processed, and real-time computation of all that information is rarely achievable because of bottlenecks caused by the enormous data volumes that HD cameras generate in a given period of time. Edge computing arises as a solution to this drawback: by filtering and processing some information directly on the capturing device, it reduces the bandwidth required for transferring data to the cloud.

Deep learning is nowadays the default method for drawing information from digital photos and videos, thanks to flexible architectures that learn from raw input data and increase prediction accuracy in object detection. Its working principle is based on multi-layered neural networks, which are trained on massive datasets. However, training a deep learning network is a computationally intensive task that requires many matrix multiplication operations in parallel, so most state-of-the-art commercial single-board computers, being single-thread performance-optimized architectures, are not suitable hardware platforms for it. With the arrival of developer kits specialized in running artificial intelligence workloads, such as NVIDIA's Jetson Nano, hardware is no longer a constraint in the deployment of deep learning applications.
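To see why deep learning workloads favor parallel hardware such as the Jetson Nano's GPU over single-thread-optimized boards, consider that the core operation of a neural-network layer is a matrix-vector multiplication in which every output element is an independent dot product. The sketch below is purely illustrative (pure Python, toy sizes, hypothetical weights); it is not taken from any particular framework.

```python
# Illustration only: the forward pass of one fully connected layer is
# y = W x + b. Each output y[i] is an independent dot product, which is
# exactly the kind of work a GPU can spread across many parallel cores.

def dense_forward(W, x, b):
    """Compute y[i] = sum_j W[i][j] * x[j] + b[i] for one layer."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(W, b)]

# Toy layer mapping 3 inputs to 2 outputs (made-up weights for illustration).
W = [[1.0, 0.0, 2.0],
     [0.5, 1.0, 0.0]]
b = [0.1, -0.1]
x = [1.0, 2.0, 3.0]

print(dense_forward(W, x, b))
```

Training repeats this multiplication (and its gradient counterpart) across millions of weights and images, which is why the many small parallel cores of a GPU outperform a few fast sequential ones for this task.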