Here is some context to start with.
I know of two ways to define a procedure, say one that squares a given number:
```scheme
;; Form #1
(define (square x) (* x x))

;; Form #2
(define square (lambda (x) (* x x)))
```
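In fact Form #1 is just syntactic sugar for Form #2, so either definition produces the same closure. A quick REPL check (any standard Scheme):

```scheme
;; Both forms define the same procedure
(define (square x) (* x x))

(square 5)  ;; => 25
(square -3) ;; => 9
```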
Neither definition maintains state, and both yield identical results for identical inputs. So far so good. Now consider the following procedure definition.
```scheme
(define f
  (let ((state 1))
    (lambda (x)
      (set! state (* state x))
      state)))

(+ (f 0) (f 1)) ;; returns 0

;; Reset interpreter and run with order of arguments to + reversed
(+ (f 1) (f 0)) ;; returns 1
```
This procedure is easily explained. It maintains a local state variable initialized to 1. Each input is multiplied by the state, the state variable is updated to the product, and the new value of the state variable is returned. Consequently, if the input is ever 0, the procedure will return 0 for all subsequent inputs. The state lives not in the global environment but in a local environment "attached" to the procedure. This local environment is easier to visualize when the `let` construct is replaced with an equivalent `lambda` expression.
```scheme
(define f
  ((lambda (state)
     (lambda (x)
       (set! state (* state x))
       state))
   1))
```
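To see the "poisoning by zero" behaviour described above, here is a possible REPL session with that definition (assuming a fresh interpreter):

```scheme
(define f
  ((lambda (state)
     (lambda (x)
       (set! state (* state x))
       state))
   1))

(f 2) ;; => 2  (state is now 2)
(f 3) ;; => 6  (state is now 6)
(f 0) ;; => 0  (state is now 0)
(f 5) ;; => 0  (0 * 5 is 0; every later call returns 0)
```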
My problems began when I tried to convert this definition to the "regular" form, i.e. `(define (f x) ...)`. I tried the following, with and without the `let` construct:
```scheme
(define (f x)
  (let ((state 1))
    (set! state (* state x))
    state))
```
I was not sure if this would work; evaluating the test expressions quickly confirmed my suspicions. The two definitions were NOT equivalent. Unlike the first form, the "regular" form did not seem to maintain state across invocations. Here is my understanding of what is happening.
Each time the procedure body is evaluated, a fresh frame is created in which state is bound to 1. The `set!` call does change state to the product in that frame. However, nothing is changed in the "nearest" enclosing environment, which is the global environment. The frame is discarded when the procedure terminates and the state is lost. In the earlier case the attached local environment stayed alive even after the procedure returned, thereby enabling it to preserve state.
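The re-initialization on every call is easy to observe with sequential calls (avoiding any reliance on the argument-evaluation order of `+`):

```scheme
(define (f x)
  (let ((state 1)) ;; a fresh state is created on every call
    (set! state (* state x))
    state))

(f 0) ;; => 0
(f 1) ;; => 1, not 0: the zero from the previous call was forgotten
```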
To test my theory I moved the state variable to the global environment. Everything worked as expected thereafter.
```scheme
(define state 1)

(define (f x)
  (set! state (* state x))
  state)
```
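And indeed, with the global variable the product survives across calls (fresh interpreter assumed):

```scheme
(define state 1)

(define (f x)
  (set! state (* state x))
  state)

(f 2) ;; => 2
(f 0) ;; => 0
(f 7) ;; => 0  (state stays 0 once poisoned, just like the let-over-lambda version)
```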
My explanation notwithstanding, two questions remain.
- Is my understanding correct?
- Is it possible to convert the procedure from one form to another without resorting to defining state globally? If so, how?
PS: Thanks to Magicindian for helping me to derive my questions from what started out as a vague tangent of the solution to an SICP exercise. He has a particular talent for distilling clear and concise ideas from tangled masses of thoughts. He gave me several useful inputs which I neglected to write down. I promise to keep notes the next time I have interesting discussions with my friends and report them along with the final summary.