Static Judging of Like Models

Ed Pearson, March 2022

This is based on static judging from past Rocket Runs (RRs; see footnote 1). It shows one way to compare and rank similar models.


[Photo: A jar of Estes Mosquitoes -- Similar Models to Evaluate]


[Photo: Estes Mosquito (left) and 14mm Cousins -- Dissimilar Models]

What follows is the RR static assessment schema.

The schema consists of an eligibility assessment and a dozen-plus categories to evaluate. A judge examines a model, confirms it is eligible, and assigns a numeric value to each category. Each category specifies a range of values (i.e., choices) to assist the judge. The chosen values are summed, and that total becomes the model's score. Scores are then sorted for all eligible models: the highest score identifies the "best" model, and the sorted order determines the remaining places.
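
To make the mechanics concrete, here is a minimal sketch of the sum-and-rank step in Python. The model names and category values are hypothetical examples; this illustrates the arithmetic, not any actual club software.

    # Each model's thirteen category values (hypothetical examples).
    scores = {
        "Model A": [1, 0, 1, 0, 2, 1, 2, 1, 2, 1, 2, 1, 0],
        "Model B": [0, -1, 0, 0, 0, 1, 0, -1, 0, 0, 0, 0, 0],
        "Model C": [1, 1, 1, 0, 1, 2, 1, 1, 1, 1, 1, 1, 0],
    }

    # Sum each model's category values, then rank highest total first.
    totals = {name: sum(values) for name, values in scores.items()}
    ranking = sorted(totals.items(), key=lambda item: item[1], reverse=True)

    for place, (name, total) in enumerate(ranking, start=1):
        print(f"{place}. {name}: {total}")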

Eligibility assessment takes some time to explain, so it is addressed in footnote 2. Someone who takes the time to build and enter an ineligible RR model deserves a reasonable rationale for why the entry was excluded.

Here are the dozen-plus categories chosen for evaluation, a short narrative for each, each category's range of values, and guidance for assigning values. (The ranges are also collected into a code sketch after the list.)

  1. Fins Rotational Alignment
    The Mosquito is a three-finned model, so its fins are expected to be spaced equally, 120 degrees apart. The scoring range is -1 to +1. A marked template is used to assist judging. Models whose fins are clearly off the template receive a -1. Fins may appear to match the template yet no longer match once the model is rotated; assign these models a 0. A perfect template match, even as the model is rotated, receives a +1. If you have a fin jig, you can use it in lieu of a printed template.

  2. Fins Vertical Alignment
    Fins should be attached parallel to the model's longitudinal axis. The scoring range is -1 to +1. Use a fin jig, or sight along the edges of the fins to check that their extended lines bisect the nosecone equally. Assign a -1 to models whose fins are clearly misaligned. If there is some deviation but the fins are mainly aligned, give a 0. Give a +1 to models with perfect vertical alignment.

  3. Fins Perpendicularly Aligned
    Fins should be perpendicular to a plane tangent to the body tube. The value range is -1 to +1. Use a fin jig to check alignment, or look at a fin's attachment line face on: you should see only the fin's edge, not either side or a hint of a side. Check the other fins too. Assign a -1 if more than one fin is not perpendicular to the body, a 0 if there seems to be some misalignment, and a +1 if all three fins are perpendicular to the body.

  4. Fins Horizontally Aligned
    Models should sit straight -- not canted or leaning -- when placed on a table. Leaning occurs when a fin is attached higher or lower on the body tube than the other fins, putting the fins on different horizontal planes. If you espy fin-attachment height differences or canting, assign a -1. Give a 0 -- the normal case -- to a model that appears to sit level (longitudinal axis perpendicular to the model's horizontal plane).

  5. Fins - Airfoils/Construction
    It is unnecessary to sand Mosquito kit fins or to substitute materials, yet nicely sanded fins or the use of other materials take time and reflect craftsmanship. The judging range is -1 to +2. Give a -1 to unevenly sanded/airfoiled fins. If fin edges are unsanded or merely evenly rounded, assign a 0. Give a +1 to fins that are evenly sanded and airfoiled, or well constructed. In rare cases where the airfoiling or construction is exceptional, assign a +2.

  6. Fillets
    Fillets reflect one's workmanship -- the time and care put into the model. Fillets should be neat and even. The judging range is -1 to +2. Give a -1 to a model whose fillets detract from its appearance, i.e., sloppy work. If the model has no fillets (on fins and lug), assign a 0. If the filleting is somewhat uneven or untidy but does not detract from the model's appearance (an average job), assign a +1. Give a +2 to models with great fillets.

  7. Fin Surface
    Wood grain should be covered/painted and the surface smooth and even. The evaluation range is -2 to +2, with -2 going to unfinished fins. Give a -1 to surfaces showing a lot of grain or only fair coverage (smoothness). Assign a 0 to models showing some grain or lacking surface smoothness. Give a +1 if the grain is filled/covered but the surface smoothness could improve, or if smoothness is good but hints of grain remain. Assign a +2 for an outstanding surface on all three fins.

  8. Body Tube Seams
    Seams are unseemly on the "best" model. The value range is -2 to +2. Assign a -2 when seams are left as is (unfilled), and a -1 to a poor job of filling/covering. Give a 0 to what seems to be average work. A good job of filling/covering seams rates a +1; assign a +2 to what you feel is superior work. Consider the seams on the launch lug as part of this category too.

  9. Nosecone Seam
    The transition between the body tube and nosecone is also evaluated, with values ranging from 0 to +2. It is normal to see the transition -- especially when it is highlighted by different colors -- but a really great job (a hidden transition) merits the +2. A +1 goes to models with only a faint hint of transition not due to a color change. Otherwise assign a 0.

  10. Blemishes
    Flaws, including dings, glue drips, or other presentation imperfections, detract from a model's aesthetics. The value range is -1 to +1. Assign a -1 for notable blemishes, a 0 if you observe only minor flaws, and a +1 if no imperfections are noticed.

  11. Colors or Decorations
    How one finishes a model reflects the time spent and workmanship. The assessment range is -1 to +2. An unfinished model earns a -1. A single-color model -- the normal case -- earns a 0. Two colors earn a +1, and three or more colors earn a +2.

  12. Paint/Decoration Neatness
    A judge looks for bleeding, evenness, runs, covering overlaps/peeling, and just how well the model is finished. The evaluation range is -1 to +1, with poor painting/decorating getting the -1. If there are some issues, chalk that up as normal and give a 0. Assign a +1 for outstanding painting/decorating.

  13. Other
    Issues may arise that are unanticipated and thus otherwise unaddressed. Address any issue you feel affects the judging with a value range of -1 to +1. Assign a -1 to a negative, previously unconsidered issue. For an issue not otherwise judged, but which you feel adds to the model's evaluation, assign a +1. If no unanticipated issue applies, give a 0. If there are additional, separate unanticipated issues, a second Other category may be added and rated -- but no more than two, lest they be misused as justifications.
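
For reference, the thirteen ranges above can be collected into a small table. The sketch below is a hypothetical Python encoding (not club software) that also checks a judge's sheet against the ranges before the values are summed.

    # Each category's (minimum, maximum) value, per the narratives above.
    CATEGORY_RANGES = {
        "Fins Rotational Alignment":    (-1, 1),
        "Fins Vertical Alignment":      (-1, 1),
        "Fins Perpendicularly Aligned": (-1, 1),
        "Fins Horizontally Aligned":    (-1, 0),
        "Fins - Airfoils/Construction": (-1, 2),
        "Fillets":                      (-1, 2),
        "Fin Surface":                  (-2, 2),
        "Body Tube Seams":              (-2, 2),
        "Nosecone Seam":                (0, 2),
        "Blemishes":                    (-1, 1),
        "Colors or Decorations":        (-1, 2),
        "Paint/Decoration Neatness":    (-1, 1),
        "Other":                        (-1, 1),
    }

    def validate_sheet(sheet):
        """Raise if a category is missing or a value is out of range."""
        for category, (low, high) in CATEGORY_RANGES.items():
            value = sheet[category]  # KeyError if the category is missing
            if not low <= value <= high:
                raise ValueError(f"{category}: {value} outside [{low}, {high}]")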

When scores are tied, say among the five highest rankings, a tiebreaker is used. Among the contenders (tied scores), the higher ranking goes to the model with the higher Category 10 (Blemishes) value. If a tie remains, the higher Category 11 (Colors or Decorations) value further winnows the tied field. If there is still a tie, no third tiebreaker is applied; instead, the place is shared between the contenders.
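
In code form, the tiebreak amounts to sorting on a three-part key: total score, then the Blemishes value, then the Colors or Decorations value. The sketch below is hypothetical (it assumes the per-category sheets from the encoding above) and shares a place between models still tied on all three parts.

    def rank_models(models):
        """models maps a model name to its per-category value sheet (a dict)."""
        def key(name):
            sheet = models[name]
            return (sum(sheet.values()),             # total score
                    sheet["Blemishes"],              # first tiebreaker
                    sheet["Colors or Decorations"])  # second tiebreaker

        ordered = sorted(models, key=key, reverse=True)
        places, last_key = [], None
        for i, name in enumerate(ordered, start=1):
            if key(name) != last_key:
                place, last_key = i, key(name)       # new place
            places.append((place, name))             # full ties share a place
        return places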

This write-up shows how the "best" RR model is determined. Despite attempts at objectivity and fairness, the judging is still subjective: subjective in the choice of evaluation categories, in the weighting (value ranges), and in what the judge decides, and arbitrary in how ties are broken. Your feedback can help improve objectivity -- a goal. A methodical schema helps somewhat to reduce subjectivity. For example, when I have judged, it is only when the values are summed and the scores ranked that I learn which is the "best" model and the other placings. This differs from alternate judging approaches (not gone into here), which may yield quicker rankings but carry greater validity/objectivity issues.

Having multiple judges may help make the activity seem less subjective and may offset individual judging errors/omissions, but it adds time to the judging. Determining results in a timely fashion is frequently a judge's bugaboo and is only mentioned here; budget about 1.5 hours to check 12 models. Before getting further afield, I hope you have found this insightful.

Footnote 1
RRs are club events kindred to Easter Egg hunts, using model rockets. We've used small fleets of Estes Mosquitoes, and children attempt to recover and keep the models they find. Children must be kept at bay until the models are flown, and Mosquitoes must not overfly spectators or participants, since they can kick out their engines and may streamline in.

Prior to launch day, the donated Mosquitoes are judged to determine the "best" ones. This write-up shows how that static evaluation is performed.

This is but one way to compare like models; it may offer some ideas for organizers planning similar events.

Footnote 2
Eligible models are ones that are flyable, siblings, and perceived as safe. Flyable means that, if prepped, the presented model could be flown. Ineligible examples are models whose engines will not fit, models without a launch lug, or models missing a fin or fins -- these aren't made up; I've seen each at this or other static events, or at check-ins.

Siblings are brothers or sisters (i.e., like models)...but not necessarily half-brothers/sisters. Models that are not siblings are ineligible, such as the absurd case of entering an Alpha in a Mosquito contest. Cousins such as the Gnat, Lunar Bug, Quark, or Swift should not be ranked or awarded a place in a Mosquito static-judging event either. Half-brothers/sisters are models that started as Mosquito kits but have been adapted or changed. My rule of thumb for evaluating a half-sibling: if only a silhouette of the entry were presented, could I still identify it as an adapted Mosquito, or would it read as a new model? An adaptation that makes the model unrecognizable as a Mosquito is ineligible for ranking/placement.


[Photo: Adapted Mosquito (built by John Larson)]

An eligible model satisfies the perception that its flight will be stable and its recovery non-hazardous. If unsure of a model's flight safety, consider the model ineligible for the static assessment or for flying in the RR.

So, to be eligible for static judging, an entry has to be flyable, similar to the other entries, and perceived as safe.
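
Stated as a gate, eligibility is simply the conjunction of those three conditions. A minimal hypothetical sketch (the field names are illustrative, not club software):

    from dataclasses import dataclass

    @dataclass
    class Entry:
        flyable: bool         # engine fits, lug present, all fins attached
        sibling: bool         # recognizable as a (possibly adapted) Mosquito
        perceived_safe: bool  # stable flight, non-hazardous recovery expected

    def eligible(entry):
        """An entry is statically judged only if all three conditions hold."""
        return entry.flyable and entry.sibling and entry.perceived_safe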