Started by Tom Nelson, Sep 7, 2021

Open
Optimize PostgreSQL DB with nearly 150 million records


I'm using an Azure PostgreSQL DB to store around 150 million records. The specs of the DB are 160 GB storage and around 4 GB RAM. The dataset is around 80 GB when loaded into the DB, and it isn't going to grow; it will stay around 80-85 GB.

Most of the queries are based on 4 fields:

Address (text) 
Longitude (double)
Latitude (double)
PropertyId (bigint)

I've created indexes on all these fields: GIN for Address, B-Tree for the others.
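For reference, the index definitions look roughly like this (a sketch; the index names are made up, and the GIN index assumes the pg_trgm extension, since GIN needs trigram operators to support LIKE '%...%' patterns on text):

-- Trigram GIN index so '%address%' patterns can use the index;
-- requires the pg_trgm extension.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX idx_my_table_address_trgm
    ON my_table USING gin (address gin_trgm_ops);

-- B-Tree indexes for the range lookups and id lookups.
CREATE INDEX idx_my_table_latitude   ON my_table (latitude);
CREATE INDEX idx_my_table_longitude  ON my_table (longitude);
CREATE INDEX idx_my_table_propertyid ON my_table (propertyid);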

The problem is that queries are still slow. For example:

select * from my_table
where latitude between x and y
  and longitude between p and q
  and address like '%address%';

takes minutes...

I analysed the queries with EXPLAIN ANALYZE, which shows that they do use the indexes.
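This is roughly what I ran (a sketch; the BUFFERS option is optional but also shows how much of the work is served from shared buffers versus disk):

explain (analyze, buffers)
select * from my_table
where latitude between x and y
  and longitude between p and q
  and address like '%address%';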

So my questions are: are there any ways to improve the performance (SQL-wise)? Is there a rule of thumb for calculating memory and storage requirements from the data size? And what would be the minimum adequate hardware to get results in under 2 seconds?

0 Replies
