Iteration #1

This afternoon was spent doing iteration 1. Our initial task list was:

  • Get people up to speed on RubyFIT
  • Determine how to run FIT from within Eclipse
  • Perform a spike to determine feasibility of using the cgi library (through Apache)
  • Perform a spike to determine feasibility of using a basic built-in Ruby server
  • Determine how to deploy the server so our real-world customer can view it
  • Write pages & output result for single test
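The two server spikes in that list can be roughed out quickly. A hypothetical sketch of the "basic built-in Ruby server" spike, using nothing beyond the socket standard library — the port, the single-shot design, and the page content are illustrative, not from the project:

```ruby
require 'socket'

# Hypothetical sketch of the built-in-server spike: a bare TCPServer that
# answers one request with a fixed HTML page and exits. Port and page
# content are made up for illustration.
def serve_once(port)
  server = TCPServer.new('127.0.0.1', port)
  client = server.accept
  # Read and discard the request line and headers, up to the blank line.
  while (line = client.gets) && line != "\r\n"; end
  body = '<html><body><h1>FIT results go here</h1></body></html>'
  client.print "HTTP/1.1 200 OK\r\n" \
               "Content-Type: text/html\r\n" \
               "Content-Length: #{body.bytesize}\r\n" \
               "Connection: close\r\n\r\n#{body}"
  client.close
  server.close
  body
end
```

A real spike would loop on `accept` and eventually hand result pages off to FIT, but even this much is enough to confirm that a plain Ruby process can answer a browser without Apache in the mix.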

Now the test is done, and here are the issues we talked about in our iteration retrospective:

  • Story was simpler — we initially thought the story was more complicated than it turned out to be
  • Delivered! We got the spikes done and the deployment proof of concept finished
  • Didn’t have a repeatable process — were a little lax about things as we did the spikes
  • Need cleanup of various things before concept is ready for real usage
  • We rushed to finish (the time structure of the day had us rushing to get things done at the end)
  • Pairing worked well — both for spikes and for deployment
  • FIT tasks not done — it was determined that FIT wasn’t going to be useful for the tasks today, so we moved the tasks to the end. We didn’t get to them in this iteration.
  • Manual customer test — we did a test manually on both spike results. Didn’t get it automated.
  • No unit tests (which we deemed ok for the moment, but which we will remedy as we stop doing proof of concept spikes)
  • Haven’t finished the process of determining which server method to use
  • Completed a round trip proof of concept
  • Planning game — some people felt it ran a bit long. We also discussed the possibility of running spikes prior to the planning game to help people visualize aspects of the work. This does carry the risk of biasing people toward certain paths, but Chris has found it useful and prefers it to planning first and then spiking. Brian added that where to draw the line between abstract and concrete depends on the project and the people, though he tends to agree with Chris
  • Estimates are deemed to be good enough for moving forward, though we don’t have enough information to fully evaluate them at the moment
  • BC queried whether the risks we were spiking for were really risks — were we really unsure that the cgi approach would work, for example? Chris replied with a story about a client of his who started with a technology-exploration spike at the beginning, and how that initial spike brought the development team to a base level of familiarity and comfort with the new technology. His developers were working with domain experts as well, so doing it as a spike let the developers not worry quite so much about the code. BC then asked whether, in Chris’s story, the technical risk was less the point than the information change. Chris clarified that there was technical risk — not so much whether the technology would work, but whether the team could work with the technology.
  • Discomfort with FIT came up several times. Brian suggested coming to a group consensus about what acceptance tests ought to look like for this project. Lisa suggested learning FIT first, before we decide. Brian amended his suggestion to: learn our tools before doing much on production code.
  • Need to select an acceptance test approach
  • And thus ended iteration 1 (combined group discussion next…)
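For reference, the cgi approach BC asked about amounts to a very small script that Apache executes per request. A hypothetical sketch, assuming Apache is configured to run Ruby CGI scripts; the page content is illustrative, not from the project:

```ruby
#!/usr/bin/env ruby
require 'cgi'

# Hypothetical sketch of the cgi-library spike. In real use Apache sets
# REQUEST_METHOD; defaulting it here just lets the sketch also run from a
# shell instead of dropping into CGI's interactive offline mode.
ENV['REQUEST_METHOD'] ||= 'GET'

cgi = CGI.new
response = cgi.header('text/html') +
           '<html><body><h1>Acceptance test results</h1></body></html>'
print response
```

The appeal of this route is that Apache handles all the networking; the cost is that each page view pays Ruby's startup time, which is part of what the spike comparison was meant to surface.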

2 comments on “Iteration #1”

  1. Re the fit stuff, is it feasible to hold off on FIT and start it later and have a small team continue on with it when the 2nd project is underway (and then cut over to using FIT on the 2nd project once it has stabilized?)

  2. Have you actually split into 2 groups already or are the initial spikes being done as a single group?
