The results of my lab showed a moderate improvement in reaction time, though the improvement was marginal. There may have been too many uncontrollable variables (mood, blood chemistry, environment) contributing to a slower response time. I was in a loud, distracting setting, which may have adversely affected my times. Nevertheless, the marginal improvement may imply that we are not like machines. Chapter 2 of Gleitman does not explicitly compare the biological or psychological behavior of machines to that of humans, but it does discuss feedback control. This lab should show that a human's performance improves with experience. In a machine, however, the action is controlled by a pre-determined stimulus. For example, if we had a machine perform the same task, we would first have to command it to close its fingers once it senses (by optical or RF means) that the ruler has passed a certain threshold. One can then see that its performance would be quite consistent, while a human's may improve or degrade depending on outside factors.
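To illustrate the contrast I mean, here is a minimal sketch (hypothetical thresholds and noise values, not from the lab itself): the "machine" closes its fingers at exactly the same ruler position every trial, while the "human" has an effective threshold that drifts with outside factors.

```python
import random

def machine_catch(ruler_position, threshold=5.0):
    """A machine with a pre-determined stimulus: it closes its
    fingers the instant the ruler passes the threshold, every time."""
    return ruler_position >= threshold

def human_catch(ruler_position, threshold=5.0):
    """A human's effective threshold drifts with outside factors
    (mood, noise, fatigue), so the catch point varies trial to trial."""
    noise = random.gauss(0, 1.0)  # outside factors shift the response
    return ruler_position >= threshold + noise

# The machine responds at exactly the same point on every trial,
# while the human's catch point scatters around the threshold.
machine_trials = [machine_catch(5.0) for _ in range(10)]
human_trials = [human_catch(5.0) for _ in range(10)]
print(machine_trials)  # all True: perfectly consistent
print(human_trials)    # mixed: performance varies with the noise
```

The point of the sketch is only that the machine's behavior is a fixed function of its input, so repeated trials give identical results, whereas the human's trials scatter.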
I like your observation about the environment you were given while performing this lab, and its possible effect on the erratic features in our data. However, I disagree a bit with your robot finger example. While it is true that the robot would move according to its pre-determined stimulus, robots can very easily be programmed with a feedback system. The robot in your example lacks that function, which is why it would be so consistent.
I have seen a machine "learn," in a very limited form: a prosthetic leg given to a person who had lost his leg, including the knee. The prosthetic had a clever motor that controlled the movement and orientation of the foot pad by learning the patient's movement patterns. This way, the machine was able to provide increased (though not perfect) comfort as well as adaptability to different terrain. One could say that the way this machine "learned" still differs from the learning we do. However, this may be because the machine is man-made, so all of its internal mechanics are known to us [we would label it a "feedback control system"], making it seem awfully simple and "machine-like" compared to our own learning. But I think we might work the same way; it is just that we are unable to monitor the precise internal changes that occur while we become better at catching the ruler. "Learning from experience" seems like only a sophisticated variation of learning through an internal "feedback control system".
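The kind of feedback-control "learning" described above can be sketched very simply. This is a toy proportional-feedback loop with made-up numbers, not the prosthetic's actual algorithm: the device nudges its foot-pad angle toward the angle it observes in the patient's gait, correcting a fraction of the error on each step.

```python
def adapt_foot_angle(observed_angles, gain=0.3):
    """Minimal feedback-control sketch (hypothetical values): on each
    step, compare the current setting to the observed gait angle and
    correct a fraction (gain) of the error."""
    setting = 0.0
    history = []
    for observed in observed_angles:
        error = observed - setting  # feedback: observed output vs. current setting
        setting += gain * error     # correct a fraction of the error
        history.append(setting)
    return history

# If the patient walks with a consistent 10-degree gait, the setting
# climbs toward 10 degrees over successive steps.
angles = adapt_foot_angle([10.0] * 12)
print(round(angles[0], 2), round(angles[-1], 2))  # 3.0 climbing toward 10.0
```

Whether human learning is "just" a richer version of this loop is the open question in the post above; the sketch only shows how little machinery a feedback-control system needs in order to adapt.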