P粉662802882 2023-08-31 13:18:44
I would almost certainly choose Silver's solution.
Your Statement:
makes no sense. Splitting/slicing/stitching the two result sets together is unlikely to have a measurable impact on performance. It certainly won't make "requests slow"!
Here is a SQL solution for your scenario, though, depending on the size of the tables involved, it may well be slower than stitching two result sets together.
-- this cte just gives us a contiguous sequence from 1 to the number of toys
WITH RECURSIVE seq (n) AS (
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < (SELECT COUNT(*) FROM toys)
)
SELECT title, rank_index, created_at, n
FROM (
    -- we now add row_number to the seq after removing the seq numbers
    -- already used by rank_index
    SELECT seq.n, ROW_NUMBER() OVER (ORDER BY seq.n) AS rn
    FROM seq
    WHERE NOT EXISTS (SELECT 1 FROM toys WHERE rank_index = seq.n)
) x
JOIN (
    -- get toys without rank_index and add row_number for join to prev subquery
    SELECT *, ROW_NUMBER() OVER (ORDER BY created_at DESC) rn
    FROM toys
    WHERE rank_index IS NULL
) y USING (rn)
UNION ALL
SELECT title, rank_index, created_at, rank_index
FROM toys
WHERE rank_index IS NOT NULL
-- applies to the result of UNION
ORDER BY n;
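To see what the query computes, here is a minimal Python sketch of the same gap-filling logic (the toys data is hypothetical): pinned rows keep their rank_index position, and the remaining positions are filled with the unpinned rows in created_at DESC order.

```python
from datetime import date

# Hypothetical sample data mirroring the "toys" table: rank_index pins a row
# to a fixed position; None rows are ordered by created_at DESC.
toys = [
    {"title": "A", "rank_index": None, "created_at": date(2023, 8, 1)},
    {"title": "B", "rank_index": 2,    "created_at": date(2023, 7, 1)},
    {"title": "C", "rank_index": None, "created_at": date(2023, 6, 1)},
    {"title": "D", "rank_index": 4,    "created_at": date(2023, 5, 1)},
    {"title": "E", "rank_index": None, "created_at": date(2023, 4, 1)},
]

def merged_order(toys):
    used = {t["rank_index"] for t in toys if t["rank_index"] is not None}
    # positions 1..len(toys) not taken by a pinned row (the "seq" CTE minus used ranks)
    free = [n for n in range(1, len(toys) + 1) if n not in used]
    # unpinned rows, newest first
    unpinned = sorted(
        (t for t in toys if t["rank_index"] is None),
        key=lambda t: t["created_at"],
        reverse=True,
    )
    # pinned rows sit at their rank; unpinned rows fill the free slots in order
    slots = {t["rank_index"]: t for t in toys if t["rank_index"] is not None}
    slots.update(zip(free, unpinned))
    return [slots[n]["title"] for n in sorted(slots)]

print(merged_order(toys))  # → ['A', 'B', 'C', 'D', 'E']
```

B and D stay pinned at positions 2 and 4, while A, C, and E fill positions 1, 3, and 5 in created_at DESC order, exactly like the JOIN on rn in the query above.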
If you have more than 1000 toys, the recursive CTE will hit the default cte_max_recursion_depth limit of 1000, as explained here.
You can remove the restriction by running the following command before the above query:
SET SESSION cte_max_recursion_depth = 10000; -- permit 10,000 iterations
SET SESSION cte_max_recursion_depth = 0;     -- unlimited iterations
Or change the recursive CTE to a non-recursive CTE using ROW_NUMBER() on the toys table:
WITH seq (n) AS (
    SELECT ROW_NUMBER() OVER (ORDER BY id) FROM toys
)
Here is a db<>fiddle you can play with.
P粉571233520 2023-08-31 00:35:47
I ran into this earlier this year. Sorting directly in the query is more complicated; if you want to understand its complexity in depth, you can refer to this question: MySQL result set is sorted by fixed position.
What I did was relatively simple and was done with two queries. Here is an example you can refer to:
$perPage = 10;
$page = request('page') ?? 1;
$start = ($page - 1) * $perPage + 1; // get the start number of the pagination
$end = $perPage * $page;             // get the end number of the pagination

// query the pinned items whose fixed position falls between start and end of the current page
$pinned = Model::select('title', 'rank_index', 'created_at')
    ->whereBetween('rank_index', [$start, $end])
    ->get();

// standard pagination query, excluding the pinned items (rows where rank_index has a value)
// you could also subtract the pinned count from the page size, i.e. ->paginate($perPage - $pinned->count()),
// but I prefer to leave it and trim the collection afterwards, so the per_page value
// isn't changed in a way that could mess up the front-end
$result = Model::select('title', 'rank_index', 'created_at')
    ->whereNull('rank_index')
    ->orderBy('created_at', 'DESC')
    ->paginate($perPage);

// insert the pinned items into the paginated data at the index given by rank_index
$pinned->sortBy('rank_index')->each(function ($item) use (&$result) {
    $index = $item['rank_index'] - 1;
    $result->splice($index < 0 ? 0 : $index, 0, [$item]);
});

// make sure to take only perPage items, in case pinned items were inserted into the page
$result->setCollection($result->take($perPage));

return [
    'start' => $start,
    'end' => $end,
    'pinned' => $pinned,
    'result' => $result,
];
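The splice-and-trim step above can be sketched in plain Python (the data is hypothetical, and plain lists stand in for the Eloquent query and paginator): pinned rows in the page's rank range are inserted at index rank_index - 1, then the page is trimmed back to its size.

```python
per_page = 10
page = 1
start = (page - 1) * per_page + 1   # first rank on this page (1)
end = per_page * page               # last rank on this page (10)

# "query 1": pinned rows whose fixed rank falls inside this page
all_pinned = [{"title": "P1", "rank_index": 3}, {"title": "P2", "rank_index": 5}]
pinned = [p for p in all_pinned if start <= p["rank_index"] <= end]

# "query 2": the page of unpinned rows, newest first (stand-in for paginate())
result = [{"title": f"T{i}", "rank_index": None} for i in range(1, per_page + 1)]

# insert each pinned row at (rank_index - 1), like $result->splice(...)
for item in sorted(pinned, key=lambda p: p["rank_index"]):
    index = max(item["rank_index"] - 1, 0)
    result.insert(index, item)

# trim back to per_page, like $result->take($perPage)
result = result[:per_page]

print([r["title"] for r in result])
# → ['T1', 'T2', 'P1', 'T3', 'P2', 'T4', 'T5', 'T6', 'T7', 'T8']
```

P1 and P2 end up at positions 3 and 5 of the page, and the two newest unpinned rows that were pushed past position 10 are dropped by the trim, which is why the paginator's per_page value never changes.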