How to Optimize Pagination with Multiple Joins on Large Datasets (10 Million Rows)
Hello everyone,
I am currently working on a project where I need to paginate a very large dataset of around 10 million rows. I have implemented pagination using a CTE (Common Table Expression), and it performs well without significant delays, especially compared to the traditional OFFSET/FETCH method in SQL Server. However, I am running into performance issues when combining the pagination with multiple JOIN operations, particularly against temporary tables: the query becomes significantly slower as more joins are added.
Here is a quick summary of the scenario:
- Dataset with 10 million rows.
- Using a CTE for pagination, which is performing well.
- The issue arises when I introduce several JOIN operations, especially with temporary tables.
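For reference, CTE-based pagination of the kind described above typically looks something like the sketch below. The table and column names (dbo.Orders, OrderId) are hypothetical placeholders, not the poster's actual schema:

```sql
-- A minimal sketch of ROW_NUMBER-based pagination via a CTE.
-- dbo.Orders and OrderId are hypothetical names.
DECLARE @PageNumber INT = 1, @PageSize INT = 50;

WITH Paged AS (
    SELECT OrderId,
           ROW_NUMBER() OVER (ORDER BY OrderId) AS RowNum
    FROM dbo.Orders
)
SELECT o.*
FROM Paged p
JOIN dbo.Orders o ON o.OrderId = p.OrderId
WHERE p.RowNum BETWEEN (@PageNumber - 1) * @PageSize + 1
                   AND  @PageNumber * @PageSize;
```

The CTE first identifies only the keys for the requested page, and the outer join then fetches the full rows, which keeps the wide reads limited to one page of data.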
I am looking for recommendations on more optimal ways to handle this situation. Specifically, I would like to know if there are any strategies or query optimizations I could use to improve the performance of pagination when dealing with large datasets and multiple joins.
Any insights or suggestions would be greatly appreciated!
Thank you in advance for your help!
Answers
First of all, I would try to use views and put the complex joins inside those views. That alone could already give you a significant improvement.
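A sketch of that suggestion, with hypothetical table and view names. Note that a plain view mainly simplifies the query text; to actually precompute the join, SQL Server requires an indexed view, which needs SCHEMABINDING and a unique clustered index:

```sql
-- Wrap the complex join in a view (names are hypothetical).
CREATE VIEW dbo.vOrderDetails
WITH SCHEMABINDING
AS
SELECT o.OrderId, o.OrderDate, c.CustomerName
FROM dbo.Orders o
JOIN dbo.Customers c ON c.CustomerId = o.CustomerId;
GO

-- Optional: materialize the join as an indexed view so the engine
-- can read the precomputed result instead of joining at query time.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderDetails
ON dbo.vOrderDetails (OrderId);
```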
Then look at the indexing of your database tables. Do you have indexes on the columns you are joining on? If the bottleneck is reading rather than inserting or updating, I would add more indexes and give that a try, too.
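As a hedged sketch of that advice, an index on the join column, optionally covering the columns the query selects, might look like this (all names are hypothetical):

```sql
-- Nonclustered index on the join key; INCLUDE adds the selected
-- columns so the query can be satisfied from the index alone.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, Status);
```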
I've added indexes to the columns. The initial pages are fast, but when the page is deep, the query is slow and takes a long time.
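For context on why deep pages get slow: with ROW_NUMBER or OFFSET pagination, SQL Server must still read and discard every row before the requested page, so the cost grows with page depth. One common alternative is keyset (seek-based) pagination, which remembers the last key of the previous page and seeks past it. A sketch, with hypothetical names:

```sql
-- OFFSET-style: page 10,000 at 50 rows/page reads ~500,000 rows
-- just to throw most of them away.
SELECT OrderId, OrderDate
FROM dbo.Orders
ORDER BY OrderId
OFFSET 499950 ROWS FETCH NEXT 50 ROWS ONLY;

-- Keyset-style: seek directly past the last key seen, so the cost
-- stays roughly constant no matter how deep the page is.
DECLARE @LastOrderId INT = 499950;  -- last key from the previous page
SELECT TOP (50) OrderId, OrderDate
FROM dbo.Orders
WHERE OrderId > @LastOrderId
ORDER BY OrderId;
```

The trade-off is that keyset pagination needs a stable, indexed sort key and does not support jumping to an arbitrary page number directly.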
Have you tried server side processing?
https://datatables.net/reference/option/serverSide
Yes, what I'm doing right now is server-side processing. My problem is in the query itself, so is there a solution at that level?
Have you tried this?