How a human body functions is not so different from how a smart city does. When we feel warm, our bodies automatically begin to perspire to help us cool down. Meanwhile, smart buildings respond to changes in temperature by adjusting the air conditioning or heating.

Like smart devices, our bodies respond to environmental stimuli in real time. But the similarities end there, for while our bodies have in-built data storage and processing systems (read: DNA and the nervous system), tech requires the support of databases to function.

Amey Banarse, Vice President of Data Engineering at Yugabyte, shares how the right databases can help create more adaptable and resilient smart cities.

Databases: The foundation of smart cities

Smart cities are a way for governments to improve citizens' lives through tech, says Banarse. But for the tech to work, it needs to be fed with data.

For example, the city of Hangzhou, China uses AI to regulate traffic signals. This has halved travel times for ambulances and commuters, and cut travel times on expressways by 4.6 minutes, wrote GovInsider. These AI-powered traffic signals work by analysing traffic and weather data.

This means that governments will need databases that can reliably handle massive amounts of data and serve it in real time if they want to build smart cities.

A database that can scale

Cloud-native databases, which are built and run directly on the cloud, can handle large volumes of data and expand when necessary. This is essential for smart cities to function, highlights Banarse.

For instance, cities such as San Francisco are focused on providing residents with stellar connectivity and seamless citizen services.

This includes tech such as smart parking, where parking meter rates are automatically adjusted based on occupancy rates to distribute parking demand more evenly. It has helped San Francisco residents find parking about five minutes faster, wrote Bloomberg CityLab.
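The pricing logic described above can be thought of as a simple feedback rule: raise rates where parking is scarce, lower them where it is plentiful. A minimal sketch follows; the rate thresholds, step size, and bounds are hypothetical illustrations, not San Francisco's actual policy:

```python
def adjust_parking_rate(current_rate, occupancy, min_rate=0.25, max_rate=8.00):
    """Demand-responsive pricing sketch: nudge the hourly rate up when a
    block is nearly full and down when spaces sit empty, so that demand
    spreads more evenly across blocks. All numbers are illustrative."""
    if occupancy > 0.80:        # block nearly full: small price increase
        current_rate += 0.25
    elif occupancy < 0.60:      # plenty of space: small price decrease
        current_rate -= 0.25
    # Keep the rate within the allowed band.
    return round(min(max(current_rate, min_rate), max_rate), 2)

# A nearly full block gets a small rate increase:
print(adjust_parking_rate(2.00, 0.95))  # 2.25
# An underused block gets cheaper, but never below the floor:
print(adjust_parking_rate(0.25, 0.10))  # 0.25
```

Run periodically per block, a rule like this steers drivers toward underused streets without any central dispatcher.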

San Francisco is also known for sporting events that bring in an influx of visitors, shares Banarse. It is important for cities like these to have databases that can handle these spikes in demand when necessary, he adds.

However, traditional databases rely on physical servers and can only store a limited amount of data per data centre. Governments would have to buy thousands of servers to expand their services, incurring high costs. These existing databases are also unable to handle such scale and peak demand, says Banarse.

Yugabyte’s cloud-native databases can help solve this problem. Yugabyte helped internet service provider Plume provide smart home related services to millions of homes.

Plume’s existing data stores were either time-consuming to manage, expensive, too slow, or all of the above. With Yugabyte, Plume now handles over 27 billion operations every day, and latency has dropped from three to five seconds to just 30 to 40 milliseconds.

Yugabyte’s databases take inspiration from big tech firms like Facebook, which grew its user base from one million to over a billion in less than a decade, wrote GovInsider. Yugabyte’s founders drew on how these companies scaled when building their own databases.

The database offers more than 25 times the storage capacity of traditional databases, reveals Banarse. It can also process more data per second than physical data centres.

A resilient database

Resilient data stores are also necessary for the smooth functioning of smart cities, emphasises Banarse. In smart cities, any downtime in tech can cause great inconvenience, he explains.

If Hangzhou’s AI-run traffic lights fail, for instance, the result could be traffic congestion or accidents. Data platforms need to stay operational even through natural disasters or human error, says Banarse.

Cloud-native databases can continue operating even if such events occur, he adds. For example, a large automotive firm in the United States created a platform where drivers could check their car’s performance at any time. Any downtime in the firm’s data stores would take these functions offline, shares Banarse.

This was a problem with its existing data stores: whenever the company made upgrades, certain applications and data centres would go down.

Yugabyte helped tackle this problem with its cloud-native databases, which provided the firm with round-the-clock uptime. Cloud-native databases do not rely on a single data centre, so a failure at one location will not bring down the entire database.
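The fault tolerance described here is typically achieved by replicating each write to several nodes and requiring only a majority of them to acknowledge it, so one failed node does not take the store offline. The toy class below is a hypothetical illustration of that general quorum idea, not Yugabyte's actual replication protocol:

```python
class ReplicatedStore:
    """Toy quorum replication: a write succeeds as long as a majority
    of replicas are reachable, so a single failed replica (or data
    centre) does not stop the database from serving requests."""

    def __init__(self, num_replicas=3):
        self.replicas = [dict() for _ in range(num_replicas)]
        self.down = set()  # indices of failed replicas

    def write(self, key, value):
        acks = 0
        for i, replica in enumerate(self.replicas):
            if i not in self.down:
                replica[key] = value
                acks += 1
        # Commit only if a majority of replicas acknowledged the write.
        return acks > len(self.replicas) // 2

    def read(self, key):
        for i, replica in enumerate(self.replicas):
            if i not in self.down and key in replica:
                return replica[key]
        return None

store = ReplicatedStore()
store.write("signal:42", "green")
store.down.add(0)                       # one data centre fails
print(store.write("signal:42", "red"))  # True: majority still up
print(store.read("signal:42"))          # red
```

With three replicas, the store tolerates one failure; a second failure breaks the majority and writes are rejected rather than silently lost.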

Today, the firm manages data from about 20 million cars, which it is able to do because of Yugabyte’s database. Yugabyte’s data stores also allow real-time access to data, so the firm can provide improved services, such as automatically alerting the authorities if the programme detects that a vehicle has been in an accident.

The human body is a well-tuned machine. With the right databases, smart cities too can resemble one, and bring greater convenience and services to citizens.

Learn more about cloud-native databases at DSS Asia, happening on 30 and 31 March 2022. Register for the event here.
