• on distinguishing memoization and dynamic programming

    From Julieta Shem@jshem@yaxenu.org to comp.lang.lisp,comp.programming on Wed Jan 3 16:53:40 2024

    I was trying to distinguish memoization from dynamic programming --- in
    a technical way --- and I failed. Can you write something like a
    mathematical definition of each one?
  • From Kaz Kylheku@433-929-6894@kylheku.com to comp.lang.lisp,comp.programming on Wed Jan 3 20:06:39 2024

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    > I was trying to distinguish memoization from dynamic programming --- in
    > a technical way --- and I failed. Can you write something like a
    > mathematical definition of each one?

    Did you check Wikipedia?

    https://en.wikipedia.org/wiki/Dynamic_programming

    Dynamic programming is an "algorithmic paradigm" according to this page;
    a nice term.

    Memoization is a specific algorithmic trick, which is used in some
    solutions that fall into the dynamic programming paradigm.
    (Its use is often essential, so that practically useful run-times
    can be achieved: e.g. exponential time knocked down to polynomial.)

    Dynamic programming breaks a larger problem into sub-problems which
    can be solved separately and then integrated to solve the
    larger problem.

    Memoization helps when the recursion leads to overlapping subproblems
    that lead to an exponential explosion if the duplication is not
    identified and suppressed.
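
    A rough Common Lisp sketch of the point (illustrative only; the
    function names and the hash-table cache are my own, not part of any
    standard API):

      ;; Naive recursion: exponential, because the same subproblems
      ;; are recomputed over and over.
      (defun fib (n)
        (if (< n 2)
            n
            (+ (fib (- n 1)) (fib (- n 2)))))

      ;; Memoized version: each fib(k) is computed at most once, so the
      ;; number of recursive calls drops from exponential to linear in n.
      (let ((cache (make-hash-table)))
        (defun memo-fib (n)
          (or (gethash n cache)
              (setf (gethash n cache)
                    (if (< n 2)
                        n
                        (+ (memo-fib (- n 1)) (memo-fib (- n 2))))))))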
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
  • From Kaz Kylheku@433-929-6894@kylheku.com to comp.lang.lisp,comp.programming on Wed Jan 3 20:16:36 2024

    On 2024-01-03, Kaz Kylheku <433-929-6894@kylheku.com> wrote:
    > On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    >> I was trying to distinguish memoization from dynamic programming --- in
    >> a technical way --- and I failed. Can you write something like a
    >> mathematical definition of each one?
    >
    > Did you check Wikipedia?
    >
    > https://en.wikipedia.org/wiki/Dynamic_programming
    >
    > Dynamic programming is an "algorithmic paradigm" according to this page;
    > a nice term.

    By the way, this "programming" does not refer to writing a computer
    program, but to finding a solution that can be used to schedule
    a program of events.

    The existence of a dynamic programming algorithmic paradigm has
    nothing to do with the fact that we write programs to make it happen.

    This explains the "programming" term: https://en.wikipedia.org/wiki/Mathematical_optimization#History

    There is another kind of "programming" in mathematical optimization: https://en.wikipedia.org/wiki/Linear_programming

    That one does not have a related algorithmic paradigm; the computer
    version is just number-crunching over the math.
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
  • From Julieta Shem@jshem@yaxenu.org to comp.lang.lisp,comp.programming on Wed Jan 3 17:55:37 2024

    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    > On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    >> I was trying to distinguish memoization from dynamic programming --- in
    >> a technical way --- and I failed. Can you write something like a
    >> mathematical definition of each one?

    [...]

    > Dynamic programming breaks a larger problem into sub-problems which
    > can be solved separately and then integrated to solve the
    > larger problem.

    I can't distinguish this definition from ``recursive''.

    > Memoization helps when the recursion leads to overlapping subproblems
    > that lead to an exponential explosion if the duplication is not
    > identified and suppressed.

    So it seems that memoization is a particular kind of strategy that
    falls within the dynamic programming set of strategies. (Thanks for the
    historical addendum in your other post.)

    Why do they say ``overlapping subproblems'' when it seems that what is
    meant is a duplicate problem? For instance, the interval [0, 10]
    overlaps with the interval [5, 15], but they're not the same. AFAICT,
    memoization is only useful when at least two of the subproblems are
    exactly the same.
  • From Kaz Kylheku@433-929-6894@kylheku.com to comp.lang.lisp,comp.programming on Wed Jan 3 22:58:19 2024

    On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    > Why do they say ``overlapping subproblems'' when it seems that what is
    > meant is a duplicate problem? For instance, the interval [0, 10]
    > overlaps with the interval [5, 15], but they're not the same. AFAICT,
    > memoization is only useful when at least two of the subproblems are
    > exactly the same.

    The famous example is Fibonacci. If you calculate fib(7) recursively,
    fib(3), and others, will show up more than once in the recursion:

        fib(7)
        +-- fib(6)
        |   +-- fib(5)
        |   |   +-- fib(4) ...
        |   |   +-- fib(3) ...
        |   +-- fib(4) ...
        +-- fib(5)
            +-- fib(4) ...
            +-- fib(3) ...

    Why is that called overlapping? Because the left subtree rooted at
    fib(6) and the right subtree rooted at fib(5) are not the same, but
    they contain some common content (nodes that are exactly the same,
    like another copy of fib(5), multiple copies of fib(4), and so on).

    It's just in contrast to divide-and-conquer, where the problem
    space is being strictly partitioned; no part or sub-part of the
    left tree occurs in the right or vice versa.

    [0, 10] and [5, 15] overlap, and they have [5, 10] in common.
    If that can be solved as a sub-problem, such that we can solve
    [0, 4], [5, 10] and [11, 15], and put them together,
    that would be better than solving [5, 10] twice and doing
    the same thing.
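
    To see the duplication concretely, here is a small illustrative
    sketch (the counter table and function name are mine) that tallies
    how often each argument is visited by the naive recursion:

      (defvar *visits* (make-hash-table))   ; argument -> number of visits

      (defun fib-counted (n)
        (incf (gethash n *visits* 0))       ; record this visit
        (if (< n 2)
            n
            (+ (fib-counted (- n 1)) (fib-counted (- n 2)))))

      ;; After (fib-counted 7): fib(5) is visited 2 times, fib(4) 3 times
      ;; and fib(3) 5 times.  Memoization collapses each of those to one
      ;; computation plus cheap lookups.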
    --
    TXR Programming Language: http://nongnu.org/txr
    Cygnal: Cygwin Native Application Library: http://kylheku.com/cygnal
    Mastodon: @Kazinator@mstdn.ca
    NOTE: If you use Google Groups, I don't see you, unless you're whitelisted.
  • From Julieta Shem@jshem@yaxenu.org to comp.lang.lisp,comp.programming on Wed Jan 3 20:19:07 2024

    Kaz Kylheku <433-929-6894@kylheku.com> writes:

    > On 2024-01-03, Julieta Shem <jshem@yaxenu.org> wrote:
    >> Why do they say ``overlapping subproblems'' when it seems that what is
    >> meant is a duplicate problem? For instance, the interval [0, 10]
    >> overlaps with the interval [5, 15], but they're not the same. AFAICT,
    >> memoization is only useful when at least two of the subproblems are
    >> exactly the same.
    >
    > The famous example is Fibonacci. If you calculate fib(7) recursively,
    > fib(3), and others, will show up more than once in the recursion:
    >
    >     fib(7)
    >     +-- fib(6)
    >     |   +-- fib(5)
    >     |   |   +-- fib(4) ...
    >     |   |   +-- fib(3) ...
    >     |   +-- fib(4) ...
    >     +-- fib(5)
    >         +-- fib(4) ...
    >         +-- fib(3) ...
    >
    > Why is that called overlapping? Because the left subtree rooted at
    > fib(6) and the right subtree rooted at fib(5) are not the same, but
    > they contain some common content (nodes that are exactly the same,
    > like another copy of fib(5), multiple copies of fib(4), and so on).
    >
    > It's just in contrast to divide-and-conquer, where the problem
    > space is being strictly partitioned; no part or sub-part of the
    > left tree occurs in the right or vice versa.
    >
    > [0, 10] and [5, 15] overlap, and they have [5, 10] in common.
    > If that can be solved as a sub-problem, such that we can solve
    > [0, 4], [5, 10] and [11, 15], and put them together,
    > that would be better than solving [5, 10] twice and doing
    > the same thing.

    That's very clear now. Wonderful. Thank you.
  • From Tim Rentsch@tr.17687@z991.linuxsc.com to comp.programming on Fri Jan 12 16:41:52 2024

    Julieta Shem <jshem@yaxenu.org> writes:

    > I was trying to distinguish memoization from dynamic programming --- in
    > a technical way --- and I failed. Can you write something like a
    > mathematical definition of each one?

    Memoization is a stand-alone technique for improving the performance
    of pure functions, which is to say functions whose result depends
    only on their inputs. The idea is when a result is calculated for a
    particular set of inputs, the result is saved so that it needn't be
    calculated again for the same set of inputs. If lookup is cheaper
    than recomputing, it's a win to save the result (up to some amount
    of saved results, obviously) so it doesn't have to be calculated
    again. By analogy, memoization is more or less a software cache at
    the level of individual functions.

    Note that there needn't be any particular relationship between the
    inputs and the results. A function might be recursive, like the
    fibonacci function, or the different results might be completely
    independent, with no recursion at play. Memoization doesn't care -
    the only things that matter are that the output is a pure function
    of the inputs, and that lookup is cheaper than recalculation. When
    those two conditions hold, memoization can improve performance (as
    before, depending on how much storage is needed to accomplish that).
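
    As a rough Common Lisp sketch of that function-level cache (MEMOIZE
    below is my own illustration, keyed on the whole argument list with
    an EQUAL hash table; as above, it only pays off when lookup is
    cheaper than recomputation and the function is pure):

      (defun memoize (fn)
        "Return a wrapper around FN that caches results by argument list.
      Only valid when FN's result depends solely on its arguments."
        (let ((cache (make-hash-table :test #'equal)))
          (lambda (&rest args)
            (multiple-value-bind (value found) (gethash args cache)
              (if found
                  value
                  (setf (gethash args cache) (apply fn args)))))))

      ;; Hypothetical usage, wrapping an existing pure function SLOW-FN:
      ;;   (setf (symbol-function 'slow-fn) (memoize #'slow-fn))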

    Dynamic programming is used when there is a single large-scale
    problem for which it is necessary to solve a significant number of
    smaller-scale problems, and there are relationships between the
    smaller problems such that there are dependencies on those smaller
    problems that force some to be solved before others. The key idea
    of dynamic programming is to determine, based on an analysis of the
    overall problem, an ordering for the smaller problems that allows
    each smaller problem to be solved exactly once. It is common for
    dynamic programming problems that the smaller problems make up a
    regular array (of whatever dimensionality) so that results can be
    stored in a regular structure, and looking up previously solved
    subproblems can be done just by doing a normal indexing operation.
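
    A rough sketch of that tabulated style, using Fibonacci only because
    it is the running example in this thread; the table is an ordinary
    vector, and looking up a previously solved subproblem is a plain
    AREF:

      (defun fib-dp (n)
        "Bottom-up dynamic programming: subproblems 0..N are solved in an
      order that guarantees each one is solved exactly once."
        (let ((table (make-array (1+ (max n 1)) :initial-element 0)))
          (setf (aref table 1) 1)
          (loop for i from 2 to n
                do (setf (aref table i)
                         (+ (aref table (- i 1)) (aref table (- i 2)))))
          (aref table n)))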

    An abstract example of a dynamic programming problem is calculating
    a function F of three arguments I, J, K, where computing F depends
    on several values of F( {i values}, {j values}, {k values} ), where
    each {i value} is <= I, each {j value} is <= J, and each {k value}
    is <= K. By tabulating F in increasing order of I+J+K, we arrange
    for the earlier values to be available when computing F(I,J,K).
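
    A hypothetical sketch of that ordering (TABULATE-F and its COMPUTE-F
    argument are inventions for illustration; COMPUTE-F is assumed to
    read only table entries whose index sum is smaller than i+j+k):

      (defun tabulate-f (max-i max-j max-k compute-f)
        "Fill a 3-dimensional table of F in increasing order of i+j+k."
        (let ((table (make-array (list (1+ max-i) (1+ max-j) (1+ max-k)))))
          (loop for s from 0 to (+ max-i max-j max-k)
                do (loop for i from 0 to (min s max-i)
                         do (loop for j from 0 to (min (- s i) max-j)
                                  for k = (- s i j)
                                  when (<= k max-k)
                                    do (setf (aref table i j k)
                                             (funcall compute-f table i j k)))))
          table))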

    Incidentally, it can happen that F(I,J,K) is a function of lesser
    values of i, j, or k, and also on F(I,J,K) itself. When that
    happens we get an /equation/ that must be solved for F(I,J,K), and
    the equation can be solved because values for all the dependent
    (smaller) sub-problems are already known.

    I wanted to answer your question because memoization and dynamic
    programming are really nothing like each other, and I thought it
    would be good to clarify the two concepts. Hopefully you got some
    value out of my explanations here.