
What is the difference between bottom-up and top-down?

The bottom-up approach (to dynamic programming) consists of first looking at the "smaller" subproblems, and then solving the larger subproblems using the solutions to the smaller ones.

The top-down approach consists of solving the problem in a "natural manner" and checking whether you have already calculated the solution to each subproblem before.

I'm a little confused. What is the difference between these two?



rev4: A very eloquent comment by user Sammaron has noted that, perhaps, this answer previously confused top-down and bottom-up. While originally this answer (rev3) and other answers said that "bottom-up is memoization" ("assume the subproblems"), it may be the inverse (that is, "top-down" may be "assume the subproblems" and "bottom-up" may be "compose the subproblems"). Previously, I had read of memoization being described as a different kind of dynamic programming, as opposed to a subtype of dynamic programming; I was quoting that viewpoint despite not subscribing to it. I have rewritten this answer to be agnostic of the terminology until proper references can be found in the literature. I have also converted this answer to a community wiki. Please prefer academic sources. List of references: {Web: 1,2} {Literature: 5}

Recap

Dynamic programming is all about ordering your computations in a way that avoids recalculating duplicate work. You have a main problem (the root of your tree of subproblems), and subproblems (subtrees). The subproblems typically repeat and overlap.

For example, consider your favorite example of Fibonacci. This is the full tree of subproblems, if we did a naive recursive call:

TOP of the tree
fib(4)
 fib(3)...................... + fib(2)
  fib(2)......... + fib(1)       fib(1)........... + fib(0)
   fib(1) + fib(0)   fib(1)       fib(1)              fib(0)
    fib(1)   fib(0)
BOTTOM of the tree

(In some other rare problems, this tree could be infinite in some branches, representing non-termination, and thus the bottom of the tree may be infinitely large. Furthermore, in some problems you might not know what the full tree looks like ahead of time. Thus, you might need a strategy/algorithm to decide which subproblems to reveal.)

Memoization, Tabulation

There are at least two main techniques of dynamic programming which are not mutually exclusive:

Memoization - This is a laissez-faire approach: You assume that you have already computed all subproblems and that you have no idea what the optimal evaluation order is. Typically, you would perform a recursive call (or some iterative equivalent) from the root, and either hope you will get close to the optimal evaluation order, or obtain a proof that will help you arrive at the optimal evaluation order. You would ensure that the recursive call never recomputes a subproblem because you cache the results, and thus duplicate sub-trees are not recomputed.

example: If you are calculating the Fibonacci sequence fib(100), you would just call this, and it would call fib(100)=fib(99)+fib(98), which would call fib(99)=fib(98)+fib(97), ...etc..., which would call fib(2)=fib(1)+fib(0)=1+0=1. Then it would finally resolve fib(3)=fib(2)+fib(1), but it doesn't need to recalculate fib(2), because we cached it.

This starts at the top of the tree and evaluates the subproblems from the leaves/subtrees back up towards the root.
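As a minimal Python sketch of that caching (the function name and the dict cache are just illustrative):

cache = {}

def fib(n):
    # Return the cached answer if this subproblem was already solved.
    if n in cache:
        return cache[n]
    # Otherwise recurse "from the top", caching on the way back up.
    cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return cache[n]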

Tabulation - You can also think of dynamic programming as a "table-filling" algorithm (though usually multidimensional, this 'table' may have non-Euclidean geometry in very rare cases*). This is like memoization but more active, and involves one additional step: You must pick, ahead of time, the exact order in which you will do your computations. This should not imply that the order must be static, but that you have much more flexibility than memoization.

example: If you are performing fibonacci, you might choose to calculate the numbers in this order: fib(2),fib(3),fib(4)... caching every value so you can compute the next ones more easily. You can also think of it as filling up a table (another form of caching).
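A minimal Python sketch of that pre-chosen evaluation order (names are illustrative):

def fib_table(n):
    # Pre-chosen order: fill the 'table' from the smallest entries upward.
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]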

I personally do not hear the word 'tabulation' a lot, but it's a very decent term. Some people consider this "dynamic programming".

Before running the algorithm, the programmer considers the whole tree, then writes an algorithm to evaluate the subproblems in a particular order towards the root, generally filling in a table.

*footnote: Sometimes the 'table' is not a rectangular table with grid-like connectivity, per se. Rather, it may have a more complicated structure, such as a tree, or a structure specific to the problem domain (e.g. cities within flying distance on a map), or even a trellis diagram, which, while grid-like, does not have an up-down-left-right connectivity structure, etc. For example, user3290797 linked a dynamic programming example of finding the maximum independent set in a tree, which corresponds to filling in the blanks in a tree.

(At its most general, in a "dynamic programming" paradigm, I would say the programmer considers the whole tree, then writes an algorithm that implements a strategy for evaluating subproblems which can optimize whatever properties you want (usually a combination of time-complexity and space-complexity). Your strategy must start somewhere, with some particular subproblem, and perhaps may adapt itself based on the results of those evaluations. In the general sense of "dynamic programming", you might try to cache these subproblems and, more generally, try to avoid revisiting subproblems, with a subtle distinction perhaps being the case of graphs in various data structures. Very often, these data structures are at their core like arrays or tables. Solutions to subproblems can be thrown away if we don't need them anymore.)

[Previously, this answer made a statement about the top-down vs bottom-up terminology; there are clearly two main approaches called Memoization and Tabulation that may correspond to those terms (though not exactly). The general term most people use is still "Dynamic Programming", and some people say "Memoization" to refer to that particular subtype of "Dynamic Programming". This answer declines to say which is top-down and which is bottom-up until the community can find proper references in academic papers. Ultimately, it is important to understand the distinction rather than the terminology.]

Pros and cons

Ease of coding

Memoization is very easy to code (you can generally* write a "memoizer" annotation or wrapper function that automatically does it for you), and should be your first line of approach. The downside of tabulation is that you have to come up with an ordering.

*(this is actually only easy if you are writing the function yourself, and/or coding in an impure/non-functional programming language... for example if someone already wrote a precompiled fib function, it necessarily makes recursive calls to itself, and you can't magically memoize the function without ensuring those recursive calls call your new memoized function (and not the original unmemoized function))
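For instance, in Python such a "memoizer" can be sketched as a decorator; the standard library's functools.lru_cache does the same job, and the hand-rolled version below is only illustrative:

import functools

def memoize(f):
    cache = {}
    @functools.wraps(f)
    def wrapper(*args):
        # Look up the value from the cache; compute and store it on a miss.
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapper

@memoize  # or: @functools.lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

Note that this works precisely because the recursive calls go through the rebound name fib, which is exactly the caveat described in the footnote above.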

Recursiveness

Note that both top-down and bottom-up can be implemented with recursion or iterative table-filling, though it may not be natural.

Practical concerns

With memoization, if the tree is very deep (e.g. fib(10^6)), you will run out of stack space, because each delayed computation must be put on the stack, and you will have 10^6 of them.
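A quick illustration (assuming CPython, whose default recursion limit is roughly 1000 frames; the snippet is a sketch):

import sys

cache = {}

def fib(n):
    if n not in cache:
        cache[n] = n if n < 2 else fib(n - 1) + fib(n - 2)
    return cache[n]

# Each pending fib(n - 1) call sits on the call stack, so the depth is ~n:
# fib(10**6)  # raises RecursionError under the default recursion limit
sys.setrecursionlimit(2000)  # raising the limit only postpones the problem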

Optimality

Either approach may not be time-optimal if the order in which you happen (or try) to visit subproblems is not optimal, specifically if there is more than one way to calculate a subproblem (normally caching would resolve this, but it's theoretically possible that caching might not in some exotic cases). Memoization will usually add your time-complexity to your space-complexity (e.g. with tabulation you have more liberty to throw away calculations: tabulated Fib lets you use O(1) space, but memoized Fib uses O(N) stack space).
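For example, here is a sketch of the O(1)-space tabulated Fib alluded to above (names are illustrative):

def fib_constant_space(n):
    a, b = 0, 1  # the only two 'table' entries we ever need
    for _ in range(n):
        a, b = b, a + b  # slide the window forward, discarding older entries
    return a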

Advanced optimizations

If you are doing extremely complicated problems, you might have no choice but to do tabulation (or at least take a more active role in steering the memoization where you want it to go). Also, if you are in a situation where optimization is absolutely critical, tabulation will allow you to do optimizations which memoization would not otherwise let you do in a sane way. In my humble opinion, in normal software engineering, neither of these two cases ever comes up, so I would just use memoization ("a function which caches its answers") unless something (such as stack space) makes tabulation necessary... though technically, to avoid a stack blowout, you can 1) increase the stack size limit in languages which allow it, or 2) eat a constant factor of extra work to virtualize your stack (ick), or 3) program in continuation-passing style, which in effect also virtualizes your stack (not sure of the complexity of this, but basically you will effectively take the deferred call chain from the stack of size N and de-facto stick it in N successively nested thunk functions... though in some languages without tail-call optimization you may have to trampoline things to avoid a stack blowout).
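As a rough sketch of option 2, the recursion can be replaced by an explicit worklist kept on the heap; this is one illustrative way to "virtualize" the stack, not the only one:

def fib_worklist(n):
    memo = {0: 0, 1: 1}
    stack = [n]  # pending subproblems live on the heap, not the call stack
    while stack:
        k = stack[-1]
        if k in memo:
            stack.pop()  # already solved
        elif k - 1 in memo and k - 2 in memo:
            memo[k] = memo[k - 1] + memo[k - 2]  # children solved: combine
            stack.pop()
        else:
            stack.extend([k - 1, k - 2])  # "recurse" by pushing the children
    return memo[n]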

More complicated examples

Here we list examples of particular interest, that are not just general DP problems, but interestingly distinguish memoization and tabulation. For example, one formulation might be much easier than the other, or there may be an optimization which basically requires tabulation:

the algorithm to calculate edit-distance[4], interesting as a non-trivial example of a two-dimensional table-filling algorithm


@coder000001: for python examples, you could google search for python memoization decorator; some languages will let you write a macro or code which encapsulates the memoization pattern. The memoization pattern is nothing more than "rather than calling the function, look up the value from a cache (if the value is not there, compute it and add it to the cache first)".
I don't see anybody mentioning this, but I think another advantage of top-down is that you only build the look-up table/cache sparsely (i.e. you fill in the values only where you actually need them), so this might be a pro in addition to easy coding. In other words, top-down might save you actual running time, since you don't compute everything (you might have a tremendously better running time, but the same asymptotic running time). Yet it requires additional memory to keep the additional stack frames (again, memory consumption 'may' (only may) double, but asymptotically it is the same).
I am under the impression that top-down approaches that cache solutions to overlapping subproblems is a technique called memoization. A bottom up technique that fills a table and also avoids recomputing overlapping subproblems is referred to as tabulation. These techniques can be employed when using dynamic programming, which refers to solving subproblems to solve a much bigger problem. This seems contradictory with this answer, where this answer uses dynamic programming instead of tabulation in many places. Who is correct?
@Sammaron: hmm, you make a good point. I should have perhaps checked my source on Wikipedia, which I cannot find. Upon checking cstheory.stackexchange a bit, I now agree "bottom-up" would imply the bottom is known beforehand (tabulation), and "top-down" is where you assume solutions to subproblems/subtrees. At the time I found the term ambiguous, and I interpreted the phrases in the dual view ("bottom-up" you assume solutions to subproblems and memoize, "top-down" you know which subproblems you will need and can tabulate). I will attempt to address this in an edit.
@mgiuffrida: Stack space is sometimes treated differently depending on the programming language. For example in python, trying to perform a memoized recursive fib will fail for say fib(513). The overloaded terminology I feel is getting in the way here. 1) You can always throw away subproblems you no longer need. 2) You can always avoid calculating subproblems you don't need. 3) 1 and 2 may be much harder to code without an explicit data structure to store subproblems in, OR, harder if control flow must weave between function calls (you might need state or continuations).
mja

Top-down and bottom-up DP are two different ways of solving the same problems. Consider a memoized (top-down) vs. dynamic (bottom-up) programming solution for computing Fibonacci numbers.

fib_cache = {}

def memo_fib(n):
    # Base cases: this answer uses the convention fib(0) = fib(1) = 1.
    if n == 0 or n == 1:
        return 1
    # Return the cached answer if this subproblem was solved before.
    if n in fib_cache:
        return fib_cache[n]
    ret = memo_fib(n - 1) + memo_fib(n - 2)
    fib_cache[n] = ret
    return ret

def dp_fib(n):
    # Build the answers from the bottom up, smallest subproblems first.
    partial_answers = [1, 1]
    while len(partial_answers) <= n:
        partial_answers.append(partial_answers[-1] + partial_answers[-2])
    return partial_answers[n]

print(memo_fib(5), dp_fib(5))

I personally find memoization much more natural. You can take a recursive function and memoize it by a mechanical process (first look up the answer in the cache and return it if possible; otherwise compute it recursively, and then, before returning, save the calculation in the cache for future use), whereas doing bottom-up dynamic programming requires you to encode an order in which solutions are calculated, such that no "big problem" is computed before the smaller problems it depends on.


Ah, now I see what "top-down" and "bottom-up" mean; it is in fact just referring to memoization vs DP. And to think I was the one who edited the question to mention DP in the title...
what's the runtime of memoized fib v/s normal recursive fib?
Yeah it is linear! I drew out the recursion tree and saw what calls could be avoided and realized the memo_fib(n - 2) calls would be all avoided after the first call to it, and so all the right branches of the recursion tree would be cut off and it'll reduce to linear.
Since DP involves essentially building up a results table where each result is computed at most once, one simple way to visualize a DP algorithm's runtime is to see how large the table is. In this case, it's of size n (one result per input value) so O(n). In other cases, it could be an n^2 matrix, resulting in O(n^2), etc.
Yeah, pre-populating the cache to get rid of the base case works fine and simplifies the code. When I memoize functions, I tend to like to first write it recursively and then mechanically memoize it.
Sergey Orshanskiy

A key feature of dynamic programming is the presence of overlapping subproblems. That is, the problem that you are trying to solve can be broken into subproblems, and many of those subproblems share subsubproblems. It is like "Divide and conquer", but you end up doing the same thing many, many times. An example that I have used since 2003 when teaching or explaining these matters: you can compute Fibonacci numbers recursively.

def fib(n):
  if n < 2:
    return n
  return fib(n-1) + fib(n-2)

Use your favorite language and try running it for fib(50). It will take a very, very long time. Roughly as much time as fib(50) itself! However, a lot of unnecessary work is being done. fib(50) will call fib(49) and fib(48), but then both of those will end up calling fib(47), even though the value is the same. In fact, fib(47) will be computed three times: by a direct call from fib(49), by a direct call from fib(48), and also by a direct call from another fib(48), the one that was spawned by the computation of fib(49)... So you see, we have overlapping subproblems.

Great news: there is no need to compute the same value many times. Once you compute it once, cache the result, and the next time use the cached value! This is the essence of dynamic programming. You can call it "top-down", "memoization", or whatever else you want. This approach is very intuitive and very easy to implement. Just write a recursive solution first, test it on small tests, add memoization (caching of already computed values), and --- bingo! --- you are done.
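Concretely, for the function above, "adding memoization" can be as small as this sketch (the cache dict is the only addition):

cache = {}

def fib(n):
    if n < 2:
        return n
    if n not in cache:  # the cache lookup is the only new step
        cache[n] = fib(n - 1) + fib(n - 2)
    return cache[n]

Now fib(50) returns almost instantly.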

Usually you can also write an equivalent iterative program that works from the bottom up, without recursion. In this case this would be the more natural approach: loop from 1 to 50 computing all the Fibonacci numbers as you go.

fib = [0] * 51  # pre-size the table; fib[0] = 0 already
fib[1] = 1
for i in range(49):
    fib[i + 2] = fib[i] + fib[i + 1]  # fills fib[2] .. fib[50]

In any interesting scenario the bottom-up solution is usually more difficult to understand. However, once you do understand it, you usually get a much clearer big picture of how the algorithm works. In practice, when solving nontrivial problems, I recommend first writing the top-down approach and testing it on small examples. Then write the bottom-up solution and compare the two to make sure you are getting the same thing. Ideally, compare the two solutions automatically: write a small routine that generates lots of tests (ideally, all small tests up to a certain size) and validates that both solutions give the same result. After that, use the bottom-up solution in production, but keep the top-down code, commented out. This will make it easier for other developers to understand what it is that you are doing: bottom-up code can be quite incomprehensible, even if you wrote it and even if you know exactly what you are doing.
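Such a comparison routine can be very small; here is a sketch (both implementations and their names are just illustrative):

cache = {0: 0, 1: 1}

def fib_top_down(n):
    if n not in cache:
        cache[n] = fib_top_down(n - 1) + fib_top_down(n - 2)
    return cache[n]

def fib_bottom_up(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Exhaustively compare the two solutions on all small inputs, as suggested above.
for n in range(200):
    assert fib_top_down(n) == fib_bottom_up(n), f"mismatch at n={n}"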

In many applications the bottom-up approach is slightly faster because of the overhead of recursive calls. Stack overflow can also be an issue in certain problems, and note that this can very much depend on the input data. In some cases you may not be able to write a test causing a stack overflow if you don't understand dynamic programming well enough, but some day this may still happen.

Now, there are problems where the top-down approach is the only feasible solution because the problem space is so big that it is not possible to solve all subproblems. However, the "caching" still works in reasonable time because your input only needs a fraction of the subproblems to be solved, but it is too tricky to explicitly define which subproblems you need to solve, and hence too tricky to write a bottom-up solution. On the other hand, there are situations when you know you will need to solve all subproblems. In this case go on and use bottom-up.
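A toy illustration of such a problem (the recurrence below is made up purely for the example): with f(n) = f(n // 2) + f(n // 3), only a tiny fraction of the n possible subproblems is ever needed, so memoization is easy while a bottom-up table of size n would be hopeless for large n:

cache = {0: 1}

def f(n):
    # Hypothetical recurrence, chosen only because its subproblems are sparse.
    if n not in cache:
        cache[n] = f(n // 2) + f(n // 3)
    return cache[n]

print(f(10**18))   # finishes almost instantly...
print(len(cache))  # ...having solved only a few thousand subproblems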

I would personally use top-down for paragraph optimization, a.k.a. the word wrap optimization problem (look up the Knuth-Plass line-breaking algorithm; at least TeX uses it, and some software by Adobe Systems uses a similar approach). I would use bottom-up for the Fast Fourier Transform.


Hello! I want to determine whether the following propositions are right. (1) For a dynamic programming algorithm, the computation of all the values with bottom-up is asymptotically faster than the use of recursion and memoization. (2) The time of a dynamic algorithm is always O(P) where P is the number of subproblems. (3) Each problem in NP can be solved in exponential time.
What could I say about the above propositions? Do you have an idea? @osa
@evinda, (1) is always wrong. It is either the same or asymptotically slower (when you don't need all subproblems, recursion can be faster). (2) is only right if you can solve every subproblem in O(1). (3) is kind of right. Each problem in NP can be solved in polynomial time on a nondeterministic machine (like a quantum computer, that can do multiple things simultaneously: have its cake, and simultaneously eat it, and trace both results). So in a sense, each problem in NP can be solved in exponential time on a regular computer. Side note: everything in P is also in NP, e.g. adding two integers.
minhaz

Let's take the Fibonacci series as an example:

1, 1, 2, 3, 5, 8, 13, 21...

First number: 1
Second number: 1
Third number: 2

Another way to put it:

Bottom (first) number: 1
Top (eighth) number in the given sequence: 21

In the case of the first five Fibonacci numbers:

Bottom (first) number: 1
Top (fifth) number: 5

Now let's take a look at the recursive Fibonacci algorithm as an example:

public int recursive(int n) {
    if ((n == 1) || (n == 2)) {
        return 1;
    } else {
        return recursive(n - 1) + recursive(n - 2);
    }
}

Now if we execute this program with the following call:

recursive(5);

If we look closely at the algorithm, in order to generate the fifth number it requires the 3rd and 4th numbers. So my recursion actually starts from the top (5) and then goes all the way down to the bottom/lower numbers. This approach is actually a top-down approach.

To avoid doing the same calculation multiple times we use dynamic programming techniques. We store previously computed values and reuse them. This technique is called memoization. There is more to dynamic programming than memoization, but it is not needed for the current problem.

Top-Down

Let's rewrite our original algorithm and add memoization.

public int memoized(int n, int[] memo) {
    if (n <= 2) {
        return 1;
    } else if (memo[n] != -1) {
        return memo[n];
    } else {
        memo[n] = memoized(n - 1, memo) + memoized(n - 2, memo);
    }
    return memo[n];
}

And we execute this method like the following:

int n = 5;
int[] memo = new int[n + 1];
Arrays.fill(memo, -1);
memoized(n, memo);

This solution is still top-down, as the algorithm starts from the top value and goes down to the bottom at each step to get our top value.

Bottom-Up

But the question is: can we start from the bottom, say from the first Fibonacci number, and walk our way up? Let's rewrite it using this technique:

public int dp(int n) {
    if (n <= 2) {
        return 1; // guard against out-of-bounds writes for small n
    }
    int[] output = new int[n + 1];
    output[1] = 1;
    output[2] = 1;
    for (int i = 3; i <= n; i++) {
        output[i] = output[i - 1] + output[i - 2];
    }
    return output[n];
}

Now if we look at this algorithm, it actually starts from lower values and then goes to the top. If I need the 5th Fibonacci number, I am actually calculating the 1st, then the 2nd, then the 3rd, all the way up to the 5th number. This technique is called a bottom-up technique.

The last two algorithms fulfill the dynamic programming requirements, but one is top-down and the other is bottom-up. Both algorithms have similar space and time complexity.


Can we say the bottom-up approach is often implemented in a non-recursive way?
Nope, you can convert any loop logic to recursion
Farah Nazifa

Dynamic Programming is often called Memoization!

1. Memoization is the top-down technique (start solving the given problem by breaking it down), and dynamic programming is a bottom-up technique (start solving from the trivial sub-problem, up towards the given problem).

2. DP finds the solution by starting from the base case(s) and working its way upward. DP solves all the sub-problems, because it works bottom-up, unlike memoization, which solves only the needed sub-problems.

DP has the potential to transform exponential-time brute-force solutions into polynomial-time algorithms. DP may be much more efficient because it is iterative.

On the contrary, Memoization must pay for the (often significant) overhead due to recursion.

To put it simply, memoization uses the top-down approach to solve the problem, i.e. it begins with the core (main) problem, then breaks it into sub-problems, and solves these sub-problems similarly. In this approach, the same sub-problem can occur multiple times and consume more CPU cycles, hence increasing the time complexity. Whereas in dynamic programming, the same sub-problem will not be solved multiple times; instead, the prior result will be used to optimize the solution.


That's not true: memoization uses a cache, which brings the time complexity down to the same as DP's.
Ashwin

Dynamic programming problems can be solved using either bottom-up or top-down approaches.

Generally, the bottom-up approach uses the tabulation technique, while the top-down approach uses the recursion (with memoization) technique.

But you can also have bottom-up and top-down approaches using recursion as shown below.

Bottom-Up: Start with the base condition and pass the values calculated so far recursively. Generally, these are tail recursions.

int n = 5;
fibBottomUp(1, 1, 2, n);

private int fibBottomUp(int i, int j, int count, int n) {
    if (count > n) return 1;      // degenerate case: n <= 1
    if (count == n) return i + j; // i and j carry the two previous values
    return fibBottomUp(j, i + j, count + 1, n); // tail call, moving up
}

Top-Down: Start with the final condition and recursively get the result of its sub-problems.

int n = 5;
fibTopDown(n);

private int fibTopDown(int n) {
    if (n <= 1) return 1;
    return fibTopDown(n - 1) + fibTopDown(n - 2);
}

no memoization or tabulation in 2nd approach?
@Pradeep, Of course, you can use memoization and/or tabulation with both approaches.
佚名

Simply put, the top-down approach uses recursion, calling subproblems again and again, whereas the bottom-up approach solves the subproblems in a single pass without recursive calls, and hence it is more efficient.


piyush121

Following is the DP-based solution for the Edit Distance problem, which is bottom-up (it fills a table iteratively). I hope it will also help in understanding the world of Dynamic Programming:

public int minDistance(String word1, String word2) { // Standard dynamic programming puzzle.
    int m = word2.length();
    int n = word1.length();

    if (m == 0) // Cannot miss the corner cases!
        return n;
    if (n == 0)
        return m;

    int[][] DP = new int[n + 1][m + 1];

    // Base cases: transforming to/from an empty prefix costs that many edits.
    for (int j = 1; j <= m; j++) {
        DP[0][j] = j;
    }
    for (int i = 1; i <= n; i++) {
        DP[i][0] = i;
    }

    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= m; j++) {
            if (word1.charAt(i - 1) == word2.charAt(j - 1))
                DP[i][j] = DP[i - 1][j - 1];
            else
                DP[i][j] = Math.min(Math.min(DP[i - 1][j], DP[i][j - 1]), DP[i - 1][j - 1]) + 1; // Main idea is this.
        }
    }

    return DP[n][m];
}

You can try writing its recursive implementation at home. It's quite a good challenge if you haven't solved something like this before.
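For comparison, here is a top-down (memoized) sketch of the same recurrence, written in Python for brevity; the names min_distance and dist are just illustrative:

from functools import lru_cache

def min_distance(word1, word2):
    @lru_cache(maxsize=None)
    def dist(i, j):
        # dist(i, j): edit distance between word1[:i] and word2[:j]
        if i == 0:
            return j
        if j == 0:
            return i
        if word1[i - 1] == word2[j - 1]:
            return dist(i - 1, j - 1)
        # delete, insert, or replace: the same recurrence as the table above
        return min(dist(i - 1, j), dist(i, j - 1), dist(i - 1, j - 1)) + 1
    return dist(len(word1), len(word2))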


JeeyCi

Nothing to be confused about... you usually learn a language in a bottom-up manner (from the basics to more complicated things), and you often build your project in a top-down manner (from the overall goal and structure of the code down to specific pieces of the implementation).