### Introduction

LeetCode 1074 ("Number of Submatrices That Sum to Target") is a very interesting problem: it combines multiple techniques and ideas of algorithm design.

### Problem Description

Given a matrix and a target, return the number of non-empty submatrices that sum to target. A submatrix (x1, y1, x2, y2) is the set of all cells matrix[x][y] with x1 <= x <= x2 and y1 <= y <= y2.

Two submatrices (x1, y1, x2, y2) and (x1', y1', x2', y2') are different if they have some coordinate that is different: for example, if x1 != x1'.

### Analysis

We can calculate the result with \(O(N^{4})\) time complexity: for an \(N \times N\) matrix, precompute 2D prefix sums so each submatrix sum is an \(O(1)\) lookup, then enumerate all \(O(N^4)\) submatrices. Now the question is: can we do better in terms of computation time? As we will see later, this is another case of a time-space trade-off. If we want faster calculation, we need additional space as compensation. There is no free lunch.
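As a baseline, here is a sketch of that \(O(N^4)\) approach (the function name is my own, and it handles rectangular \(m \times n\) matrices as well):

```python
def count_submatrices_bruteforce(matrix, target):
    """Count submatrices summing to target: O(m^2 * n^2) with 2D prefix sums."""
    m, n = len(matrix), len(matrix[0])
    # P[i][j] = sum of matrix[0..i-1][0..j-1] (one extra row/column of zeros)
    P = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            P[i + 1][j + 1] = matrix[i][j] + P[i][j + 1] + P[i + 1][j] - P[i][j]

    count = 0
    for x1 in range(m):
        for x2 in range(x1, m):
            for y1 in range(n):
                for y2 in range(y1, n):
                    # inclusion-exclusion gives the submatrix sum in O(1)
                    s = (P[x2 + 1][y2 + 1] - P[x1][y2 + 1]
                         - P[x2 + 1][y1] + P[x1][y1])
                    if s == target:
                        count += 1
    return count
```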

If we think more about the \(O(N^4)\) complexity, it seems to suggest we need to iterate through all possible submatrices. Is this really necessary?

Before going further, let's first apply a very basic algorithm-design technique: reduce the problem. Reducing the problem can have different flavors. For example, we can start with special cases and get a feel for the desired behavior when handling those edge cases. Or, as in this case, we can simplify the problem. The problem is presented in a 2D context (i.e. a matrix). To develop intuition, we could solve a 1D case first:

Given an array and a target, return the number of non-empty subarrays that sum to target.

A naive solution would have \(O(N^2)\) time complexity (using running sums), and in a similar way it suggests we need to iterate through all possible subarrays.
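The naive 1D solution can be sketched as follows (the function name is my own):

```python
def count_subarrays_naive(A, target):
    """Count subarrays summing to target by checking every (start, end) pair: O(N^2)."""
    count = 0
    for start in range(len(A)):
        running = 0
        for end in range(start, len(A)):
            running += A[end]  # extend the subarray by one element
            if running == target:
                count += 1
    return count
```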

Back to our original question: is this iteration really necessary? Note that if we need to list all possible sums of the subarrays, then we have no choice but to check all subarrays. However, in this problem, we only need to compare the sum of a subarray to the target. Listing all possible sums of the subarrays is a computation, while comparing the sum of a subarray to the target is a validation. It's often the case that validation is cheaper than computation.

There is another reason why iterating over all possibilities might not be necessary: it depends on the data in the array. If the items are independent, then there is no way to reduce the time complexity. Imagine we have a list of random integers and we want to know if all of them are strictly positive. In this case, we need to check each number in the list. However, if we know A[2k+1] = A[2k], then we only need to check half of the array. To some extent, the savings in computation time come from the accumulation of knowledge. This is similar to dynamic programming.

Putting everything together so far:

- This problem is closer to a validation task rather than a pure computation task.
- The subarrays or submatrices are not independent.

Now let's see how we can take advantage of the dependencies between different subarrays and the accumulation of knowledge. Again, this is almost exactly the same as dynamic programming.

Let \(S_i\) denote the list of sums of all subarrays ending at index \(i\) (i.e. whose maximum index is \(i\)). Note that \(S_i\) may contain duplicated values. Now let's check how we can move from \(S_i\) to \(S_{i+1}\) and how each step contributes to the count of target sums.

It's easy to see we have

\[
S_{i+1} = \{\, v + A[i+1] : v \in S_i \,\} \cup \{A[i+1]\} = \{\, v + A[i+1] : v \in S_i \cup \{0\} \,\}
\]
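The recurrence can be checked with a tiny, purely illustrative sketch that materializes each \(S_i\) as a list (this is intentionally inefficient; the helper name is my own):

```python
def subarray_sums_ending_at(A):
    """Yield S_i, the list of sums of all subarrays ending at index i."""
    S = []
    for a in A:
        # S_{i+1} = { v + A[i+1] for v in S_i } union { A[i+1] }
        S = [v + a for v in S] + [a]
        yield list(S)
```

For example, for `A = [1, 2, 3]` it yields `[1]`, `[3, 2]`, `[6, 5, 3]`.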

The contribution of \(S_{i+1}\) is the number of target values it contains:

\[
\begin{aligned}
\mathrm{contribution}(S_{i+1}) &= \#\{\, v \in S_{i+1} : v = \mathrm{target} \,\} \\
&= \#\{\, v \in S_i \cup \{0\} : v + A[i+1] = \mathrm{target} \,\} \\
&= \#\{\, v \in S_i \cup \{0\} : v = \mathrm{target} - A[i+1] \,\}
\end{aligned}
\]

This basically means that instead of adding \(A[i+1]\) to every element of \(S_i\) to construct new elements, we can leave the stored values unchanged and shift the target instead; the final result is the same. When we move from index \(i\) to index \(i+1\), we just need to add a zero to the list \(S\) in the current (shifted) frame. Unwinding the accumulated shifts, the stored values correspond exactly to the prefix sums seen so far, and at index \(i+1\) we count how many of them equal \(\mathrm{prefix}[i+1] - \mathrm{target}\).

(For a full implementation, we need to store \(S\) as a hash map from value to count, because \(S\) may contain duplicates and we need the counts.)
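A minimal sketch of the resulting \(O(N)\) routine for the 1D case (the function name is my own); the initial `counts[0] = 1` plays the role of the zero added to \(S\) before the first step:

```python
from collections import defaultdict

def count_subarrays(A, target):
    """Count subarrays of A summing to target in O(N) time and space."""
    counts = defaultdict(int)
    counts[0] = 1          # the initial "zero" in S: the empty prefix
    prefix = 0             # running prefix sum A[0] + ... + A[i]
    total = 0
    for a in A:
        prefix += a
        # subarrays ending here with sum == target correspond to
        # earlier prefix sums equal to prefix - target
        total += counts[prefix - target]
        counts[prefix] += 1
    return total
```

One common way to finish the original 2D problem from here is to fix a pair of row boundaries, collapse the rows between them into a 1D array of column sums, and run this routine on that array, giving roughly \(O(N^3)\) overall.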

----- END -----

©2019 - 2021 all rights reserved