Discussion of Harvest Modeling Project

Columbia Basin Research hosts this web page for archival purposes only. This project/page/resource is no longer active.

Welcome to the discussion area for the Harvest Modeling Project. The purpose of this project is to develop a model of salmon harvest and migration on the Pacific coast of North America to assist managers in evaluating stock-specific harvest policies.

29. Jan. 19, 1999 Meeting Minutes by Jim Norris, 1/29/99

To: NMFS Salmon Model Committee and interested parties.

From: Jim Norris

Subject: Minutes of the January 19, 1999 Model Committee meeting


Contents:

1. Attendance list.
2. Fishing mortality code update.
3. Results of multi-phase catch ceiling algorithm test.
4. Safe-guarded secant equation solver.
5. Production processes code update.
6. Input/output methods and utilities.
7. Report on PSC CTC project with UW and ESSA.
8. Next meeting.


1. Attendance List.

Norma Jean Sands (ADFG)
Troy Frever (UW)
Jim Norris (UW)
Marianne McClure (CRITFC)
Din Chen (CDFO)


2. Fishing mortality code update.

Troy reported that harvest algorithms are now in place to duplicate all features of the 1995 version of the CTC Chinook Model except the "multi-phase" catch ceilings, which will require an algorithm that spans multiple time steps. The program uses the following objects to compute fishing-related mortalities:

* Fishery. This object stores data about each fishery, including a list of FisheryUnits. Each fishery object also has a method to "takeHarvests."

* FisheryUnit. A Fishery can encompass several regions, so a FisheryUnit object is created for each one. Each FisheryUnit has pointers to the Fishery and Region objects it represents, along with a list of HVMort objects (one for each cohort that can be harvested within the FisheryUnit). Each FisheryUnit also has a "Harvest(E)" method, for which an effort scalar "E" is an argument.

* HVMort. This ("Harvest Mortality") object represents the effect of fishing activity on a single cohort in a single region. It has pointers to the fishery, region, and cohort it represents. It also has a HarvestProcess object to describe mathematically the interaction between fishing effort and the cohort. Its most important method is "Harvest(E)", which directs the computation of fishing mortalities for the associated cohort.

* HarvestProcess. Each HarvestProcess object is derived from an abstract base class that contains a method called "ComputeCatch(E)", where the relative effort scalar "E" is passed in as an argument. The method can differ for each object. For example, when configured to simulate the PSC Chinook Model, all harvest processes use the following equation for legal catch (from PSC chinook model subroutine CatchByFish):

   C = N*HR*E*(1-PNV)

where

   C   = legal catch;
   N   = cohort abundance;
   HR  = base period harvest rate;
   E   = relative effort level (i.e., "FP" in CatchByFish);
   PNV = proportion non-vulnerable.

In a different configuration, some HarvestProcess objects might use an equation like:

   C = N*(1 - exp(-q*E))

where

   q = catchability coefficient;
   E = relative effort level.

The important point is that each HVMort can have a different type of ComputeCatch(E) equation, including algorithms for selective fisheries, bag limit fisheries, etc.
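To make the polymorphism concrete, here is a minimal C++ sketch of the HarvestProcess idea described above. The class and member names are illustrative only (the actual Coast Model code is not reproduced in these minutes): one derived class implements the PSC legal-catch equation, and another an exponential catchability form.

```cpp
#include <cmath>

// Hypothetical sketch of the HarvestProcess hierarchy; names are
// illustrative, not the actual Coast Model classes.
class HarvestProcess {
public:
    virtual ~HarvestProcess() {}
    // Returns legal catch for the associated cohort at relative effort E.
    virtual double ComputeCatch(double E) const = 0;
};

// PSC Chinook Model form: C = N * HR * E * (1 - PNV).
class BaseRateHarvestProcess : public HarvestProcess {
public:
    BaseRateHarvestProcess(double N, double HR, double PNV)
        : N_(N), HR_(HR), PNV_(PNV) {}
    double ComputeCatch(double E) const override {
        return N_ * HR_ * E * (1.0 - PNV_);
    }
private:
    double N_, HR_, PNV_;  // cohort abundance, base period rate, prop. non-vulnerable
};

// Alternative catchability form: C = N * (1 - exp(-q * E)).
class CatchabilityHarvestProcess : public HarvestProcess {
public:
    CatchabilityHarvestProcess(double N, double q) : N_(N), q_(q) {}
    double ComputeCatch(double E) const override {
        return N_ * (1.0 - std::exp(-q_ * E));
    }
private:
    double N_, q_;  // cohort abundance, catchability coefficient
};
```

Each HVMort would own one such object and simply call ComputeCatch(E), so selective-fishery or bag-limit variants slot in as additional derived classes.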

* FisheryManager. A model configuration has only one FisheryManager object. This object stores the management policies assigned to each FisheryUnit. For example, one FisheryUnit might have a fixed harvest rate policy, but another might have a forced catch quota policy, while a third might have a fixed escapement policy. The FisheryManager has a method to "takeHarvests".

* FisheryPolicy. Each FisheryPolicy object is derived from an abstract base class that has an EffortScalar (E) as its most important data item. It also has a "harvest" method that contains the algorithm for adjusting the effort level to meet the policy goal (e.g., achieving a catch quota).

* HarvestManager. A model configuration has only one HarvestManager object. This object stores all the parameters required by the HarvestProcess objects, such as the base period harvest rates, FPs, and PNVs.

During program execution, the FisheryManager is called during each timestep. If active during that timestep, it computes fishing mortalities as follows:

a. Loop through the list of Fishery objects in the model and instruct each fishery to compute harvests;

b. Each fishery loops through its list of FisheryUnits and directs each FisheryUnit to compute harvests;

c. Each FisheryUnit asks the FisheryManager to pass back the FisheryPolicy it should use during this timestep;

d. The FisheryPolicy then directs the FisheryUnit to compute its fishing mortalities using effort level E (E = 1 on the first pass);

e. The FisheryUnit loops through its list of HVMort objects and directs each to compute its fishing mortalities using effort level E;

f. Each HVMort object passes the effort level E into the ComputeCatch method of its HarvestProcess object to compute the fishing mortalities for each cohort.

g. Once all the HVMort objects for a FisheryUnit have computed their harvests, the FisheryPolicy object uses its "harvest" algorithm to make any adjustments to the effort level. If adjustments are required to meet the policy objective, steps d through f are repeated until the objective is satisfied.
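The innermost adjustment loop (steps d through g) can be compressed into a short sketch. The function below is hypothetical and stands in for a FisheryUnit under a catch-quota policy: it sums the catches from its HVMort objects at effort E, then rescales E and repeats until the quota objective is satisfied.

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch of steps d-g: each entry of hvMorts plays the role
// of one HVMort's ComputeCatch(E); the policy rescales E until the
// fishery-unit total meets the quota.
double HarvestUnderQuota(const std::vector<std::function<double(double)>>& hvMorts,
                         double quota, double tol = 1e-9) {
    double E = 1.0;                                // step d: first pass uses E = 1
    for (int pass = 0; pass < 100; ++pass) {
        double total = 0.0;
        for (const auto& computeCatch : hvMorts)   // steps e-f: loop over HVMorts
            total += computeCatch(E);
        if (total <= quota + tol) return E;        // objective satisfied
        E *= quota / total;                        // step g: adjust effort, repeat
    }
    return E;
}
```

With linear catch functions (the PSC form), a single rescaling lands exactly on the quota, so the loop terminates on the second pass.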


3. Results of multi-phase catch ceiling algorithm test.

Some fisheries in the PSC Chinook Model that are controlled by catch ceilings have harvests in both the preterminal and terminal time steps (we call these multi-phase ceiling fisheries). The algorithm used to compute catches for these fisheries assumes (1) a single catch ceiling that covers both time steps, and (2) that the input fishing effort levels in both time steps are adjusted by the same relative amounts in order to meet the catch ceiling. There is no analytical solution for catches in multi-phase ceiling fisheries, because any change in the preterminal effort changes the stock abundances vulnerable to terminal effort. Instead, the algorithm iterates over both time steps, and on each iteration scales the effort levels in the preterminal and terminal timesteps by the same relative amount until the total catch from both timesteps equals the desired catch ceiling.
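A toy numerical illustration of this iteration, with a single cohort, invented harvest rates, and natural mortality ignored (none of these numbers come from the PSC model): both time steps share one effort scalar s, which is rescaled until the combined catch hits the ceiling.

```cpp
#include <cmath>

// Toy illustration (not the actual PSC code) of the multi-phase ceiling
// iteration: scale preterminal and terminal efforts by the SAME factor s
// until the combined catch equals the ceiling. N is initial abundance;
// hp and ht are assumed per-step harvest rates at base effort.
double SolveCommonEffortScalar(double N, double hp, double ht,
                               double ceiling, double tol = 1e-10) {
    double s = 1.0;
    for (int i = 0; i < 200; ++i) {
        double preCatch  = N * hp * s;           // preterminal time step
        double survivors = N - preCatch;         // abundance reaching terminal step
        double termCatch = survivors * ht * s;   // terminal time step
        double total = preCatch + termCatch;
        if (std::fabs(total - ceiling) < tol) break;
        s *= ceiling / total;                    // rescale BOTH steps together
    }
    return s;
}
```

Because the preterminal catch changes the abundance entering the terminal step, the total is nonlinear in s, which is why an iterative rescaling rather than a one-shot division is needed.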

The NMFS Coast Model allows catch ceilings to be specified for individual timesteps and regions, but does not have an algorithm to ensure equal relative effort levels across timesteps. Thus, the NMFS Coast Model algorithm works fine for ceilinged fisheries that have catches only in one timestep/region (we call these single-phase ceiling fisheries), but does not work for multi-phase ceiling fisheries (i.e., does not always give the same catches as the PSC Chinook Model).

We were interested in knowing the extent of the differences between the two algorithms. This report describes a simple, but informative, experiment to gain insight into the differences.

We prepared a *.cei file that included 16 fisheries, four of which were multi-phase ceiling fisheries (Alaska Net, Northern Net, Central Net, and WCVI Sport). The base period was defined to be 1979-1984. From 1985-1994, ceilings were forced (i.e., model catches were required to equal the ceiling exactly) for all but two fisheries (North and South Puget Sound Sport), which were forced from 1985-1993. Catch ceilings during the period 1995-1997 were unforced and set extremely high to simulate fishery policy (FP) control (i.e., model catches were always below the ceiling and were not adjusted upward to equal the ceiling; thus, the ceilings had no effect). For 1998 and beyond, ceilings were unforced and set to the average 1991-1994 catches.

Both the PSC Chinook Model and the NMFS Coast Model were run using the same *.cei file data. A catch file (*cat.prn) and escapement file (*esc.prn) were printed for each model. The absolute values of the catch and escapement differences between models were computed.

Tables 1 and 2 give the percentage change from the PSC Chinook Model to the NMFS Coast Model for catches and escapements. (These tables are not included in the email version of the minutes, but will be available on the web site discussion page.) The catch results are summarized below:

· Both models gave the same results for the base period 1979-1984;
· Catches in all ceilinged fisheries were the same when the ceilings were forced;
· Catches in the ceilinged fisheries differed when the ceilings were unforced and set very high (1995-1997);
· Beyond 1997, catches in some ceilinged fisheries were the same, but others were off by small amounts (the primary exception was the WCVI Sport fishery, which had catches up to 10% off);
· Catches in the non-ceilinged fisheries differed in all years beyond the base period.

The escapement results are summarized below:

· Escapements were different for all years beyond the base period;
· The greatest differences were in the WCVI stocks (RBT and RBH), which were up to 14% off during the period 1985-1993;
· The GSQ stock also was off by up to 12% during the period 1985-1994.

When ceilings are forced, the PSC Chinook Model algorithm maintains a constant relative EFFORT level between the preterminal and terminal timesteps, whereas the NMFS Coast Model algorithm maintains a constant relative CATCH level between the preterminal and terminal timesteps. Thus, as long as catch ceilings are forced, both algorithms give the same total catch, even for the multi-phase ceiling fisheries (e.g., 1985-1994 period in Table 1). However, each algorithm will distribute the total catch differently between the preterminal and terminal timesteps for the multi-phase ceiling fisheries due to the different assumptions. The difference in catch distribution in the multi-phase ceiling fisheries affects the relative abundances of the stocks, leading to differences in catches in non-ceilinged fisheries and to differences in spawning escapements for individual stocks. The abundance and escapement differences are most pronounced for stocks that are heavily harvested by a multi-phase ceiling fishery, such as the RBH and RBT stocks in the WCVI Sport fishery.

When ceilings are not forced, the effects on catches can be variable, depending on the level of the ceiling. If the ceiling is very low such that the unconstrained model catch in a multi-phase ceiling fishery is always above both the associated preterminal and terminal ceilings in the NMFS Coast Model, the net effect is the same as when ceilings are forced because the catches always have to be reduced to meet the ceiling. For example, during the period 1998-2005 the catches for fisheries 1-3, 5-9, and 21 were identical for both models. For the other ceilinged fisheries, the catches were generally different by a small amount (less than one percent), except for the WCVI Sport fishery, which had differences up to 10%.

The WCVI Sport fishery is an interesting case that deserves further discussion. During the period of forced ceilings (1985-1994), the total catch in the WCVI Sport fishery is the same under both algorithms, but the distribution of the catch is probably very different. This large difference in distribution leads to the large differences in escapements for the RBH and RBT stocks during this period. Once the ceilings become unforced, the catch differences between the two algorithms are large due to the large differences in stock abundances. One unexplained anomaly is that the catch differences are very small during the first three years (1995-1997) of the unforced ceilings.



4. Safe-guarded secant equation solver.

Since each HarvestProcess object can have a different type of algorithm to compute legal catch at the cohort level, a FisheryPolicy object that has a catch quota objective requires a general algorithm that will adjust the effort level in a fishery to meet the quota. Mathematically, we have the following problem:

Find E such that

   C = F(E) = K

where

   C    = total catch in a fishery;
   E    = relative effort level in the fishery;
   K    = desired catch quota;
   F(E) = unknown function.

The unknown function F(E) is the sum of all the unknown HarvestProcess functions for each cohort. The problem can be restated as follows:

Find E such that

   G(E) = F(E) - K = 0.

David Salinger (applied mathematician with Columbia Basin Research) has written and tested code to solve this problem, using what is called the "Safe-Guarded Secant Method." The code is similar to the secant method used in Function RTSEC in Numerical Recipes. One nice feature of this algorithm is that if F(E) is linear (as is the case for the PSC Chinook Model), then the safe-guarded secant method solves in a single iteration, just as the PSC Chinook Model algorithm does. This algorithm will be incorporated into the code in the near future.
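The minutes do not include David Salinger's code; the sketch below is a generic textbook-style safeguarded solver in the same spirit. It takes a secant step from the current bracket endpoints and falls back to bisection whenever that step would leave the bracket. Note that if G is linear, the very first secant step lands exactly on the root, matching the single-iteration behavior described above.

```cpp
#include <cmath>
#include <functional>

// Generic safeguarded secant sketch for G(E) = F(E) - K = 0.
// Assumes G(a) and G(b) have opposite signs (a bracketing interval).
double SafeguardedSecant(const std::function<double(double)>& G,
                         double a, double b, double tol = 1e-10) {
    double Ga = G(a), Gb = G(b);
    for (int i = 0; i < 100; ++i) {
        // Secant step through the two bracket endpoints.
        double c = b - Gb * (b - a) / (Gb - Ga);
        if (!(c > a && c < b)) c = 0.5 * (a + b);  // safeguard: bisect instead
        double Gc = G(c);
        if (std::fabs(Gc) < tol) return c;
        // Keep the endpoint pair that still brackets the root.
        if ((Ga < 0.0) == (Gc < 0.0)) { a = c; Ga = Gc; }
        else                          { b = c; Gb = Gc; }
    }
    return 0.5 * (a + b);
}
```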


5. Production processes code update.

Jim reviewed the different types of production processes used in the PSC Chinook Model. The two main categories are "hatchery" and "natural", and within each of these categories are "no enhancement" and "enhancement" options. Jim showed that, for all but one stock, the overall production process for a given stock in a given year can be described as either a Ricker function, a Linear function, or some combination of these two functions for different segments of the available spawners. One stock uses natural production with enhancement (i.e., supplementation) and requires a separate algorithm called "EnhancedRicker".

In the new code, each stock has a separate ProductionProcess object for each year. This object contains all the information to compute AgeOneFish from available spawners. In the PSC Chinook Model, the data required to compute production for a given stock may be scattered among several files. In the new code, all the information is consolidated into a single data file. A utility program gathers data from existing files and creates a new token-based data file in which the production process for each stock in each year is given as a combination of functions. Examples follow:

* Hatchery Production, No Enhancement:

  Year 1985        #For stock: SPR
    Production Linear  0  6293.42990413514  0.24763225  164.186011245499 
  end Year


Linear Parameters:

  Minimum spawners (e.g., 0)
  Maximum spawners (e.g., 6293.4299)
  EV scalar        (e.g., 0.2476)
  Slope            (e.g., 164.186)


* Hatchery Production, With Enhancement:
  Year 1991        #For stock: RBH
    Production Linear  0  6472  0.10422118  250.635577084189 
    Production Linear  6472  14073.0207135901  0.10422118  94.0663138467215 
  end Year


NOTE: The slope of the function is lower when spawners exceed 6472 (i.e., 250.64 vs 94.07).


* Natural Production, No Enhancement:

  Year 1984        #For stock: AKS
    Production Ricker  0  13148.5429992893  0.0727296397  1.4  18407.960199005  0.173581873724499 
  end Year


Ricker Parameters:

  Minimum spawners (e.g., 0)
  Maximum spawners (e.g., 13148.5)
  EV Scalar        (e.g., 0.0727)
  Ricker A         (e.g., 1.4)
  Ricker B         (e.g., 18407.96)
  RecruitAtAgeOne  (e.g., 0.1736)



* Hatchery Production, With Enhancement, Excess Natural Spawners:

  Year 1983        #For stock: GSH
    Production Linear  0  5318  0.811555445  101.088866871763 
    Production Linear  5318  7387.09069865999  0.811555445  17.5666110352434 
    Production Ricker  7387.09069865999  12387.09069866  0.811555445  4.616  79656.4622144751  0.18442657373674 
  end Year



* Natural Production, With Enhancement:

  Year 1991        #For stock: GST
    Production EnhancedRicker  0  24822.830231996  0.271089375  3.209  79656.4622144751  0.18151375096647  1  4.616  0.0659  0.3  4604976 
  end Year


Enhanced Ricker Parameters:

  Minimum spawners       (e.g., 0)
  Maximum spawners       (e.g., 24822.8)
  EV Scalar              (e.g., 0.271)
  Ricker A               (e.g., 3.209)
  Ricker B               (e.g., 79656)
  RecruitAtAgeOne        (e.g., 0.1815)
  Density Dep Flag       (e.g., 1 = density dependence on)
  Productivity Parameter (e.g., 4.616)
  SmoltAtAgeOne          (e.g., 0.0659)
  MaxBroodProportion     (e.g., 0.3)
  SmoltProductionChange  (e.g., 4604976)
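The production semantics belong to the model, but the token-based file format shown in the examples above is simple to read mechanically. A hypothetical reader for one "Production" line (the struct and function names here are made up, not the actual utility code):

```cpp
#include <sstream>
#include <string>
#include <vector>

// One parsed "Production" line: a function type token followed by its
// numeric parameters in a fixed order (minSpawners, maxSpawners, EV, ...).
struct ProductionSegment {
    std::string type;            // "Linear", "Ricker", or "EnhancedRicker"
    std::vector<double> params;
};

// Returns true and fills *seg if the line is a well-formed Production line.
bool ParseProductionLine(const std::string& line, ProductionSegment* seg) {
    std::istringstream in(line);
    std::string keyword;
    if (!(in >> keyword) || keyword != "Production") return false;
    if (!(in >> seg->type)) return false;
    seg->params.clear();
    double value;
    while (in >> value) seg->params.push_back(value);
    return !seg->params.empty();
}
```

A full loader would additionally track the "Year ... end Year" bracketing and attach the resulting segments to the right stock and year.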



Troy also has created a CohortGenerator that is used to determine how many distinct cohorts to divide the AgeOneFish into. Currently, there is just one cohort. However, this option will be important for selective fishery management when fish from a given brood year may be divided into more than one distinct cohort. Future versions of the model may require input parameters to specify how smolts from a given brood are to be modeled as separate cohorts for sex or growth, for example.


6. Input/output methods and utilities.

Jim Norris distributed and discussed the following memo on a proposed data standard for salmon models (this memo also was posted on the discussion page in November).

------ Start Memo --------------

Proposed Salmon Model Data Communication Standard

Background.

Many computer programs and models are used by Pacific salmon researchers and managers. There is a need to communicate information between programs and from individual programs to users. Currently (Nov 1998), there are three general data communication methods:

(1) raw data from one program is used by other programs for further analysis (e.g., FRAM to TAMM communication);

(2) raw data from one program is used by auxiliary programs to create formatted reports (e.g., PSC Chinook Model raw data dump); or

(3) each program creates model specific formatted reports (e.g., PSC Chinook Model and FRAM reports).

Users of the PSC Chinook Model report that the raw data dump technique has been an effective method of creating reports and analyzing output data. My thought is that data communication within and between programs may be improved by creating a Data Communication Standard. Communicating salmon model data between programs and users is similar to communicating navigation data between different types of navigation devices produced by different manufacturers. The National Marine Electronics Association (NMEA) adopted a voluntary data interfacing standard (called NMEA 0183) to solve this problem. The NMEA 0183 standard ensures that a vessel owner can purchase different brands and types of navigation devices (e.g., Furuno Loran C, Northstar GPS, Simrad Depth Sounder, Wood Freeman Autopilot) and be confident that these devices can communicate with each other.

I propose a similar type of data communication standard for salmon models. The sections below outline the idea. Check it out and let me know if you have any suggestions or comments. We plan to try a test implementation of this system during the next month.


The NMEA 0183 Standard.

The basic idea behind the NMEA 0183 standard is to communicate data via standardized "sentences." A data sentence is composed of data fields, the first of which is a sentence identifier. The number and type of data fields is standardized for each type of sentence. For example, all "GPGLL" sentences have four fields (latitude, North or South, longitude, East or West) in addition to the sentence ID and look something like this:

GPGLL,4806.78,N,12245.34,W

Data sentences are streamed from one device to another at periodic intervals. Each device can stream different sentences at different intervals. Receiving devices read each sentence, look at the ID, and then either discard or use the sentence.


Proposed Salmon Model Data Standard.

The basic idea is to stream standardized data sentences from a model to a text file. Then anyone can write a program using any type of software and hardware to read the data stream, extract the data of interest, and use that data as desired (e.g., further analysis or formatted report). The advantage of using standardized data sentences is that users can export any data in any order and at any interval. The disadvantage of this system is that there may be considerable duplication of information in each sentence, thus making for a large output file.


Sample Sentences.

The following sample sentences save data at the lowest possible level.

----- CABN sentence for cohort abundance -----

Field   Data         Sample
1       Sentence ID  CABN
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       StartAbn     378947.1438 (a double)

[Note: Each cohort will have a unique identifier code composed of the parent stock abbreviation, the brood year (using all four characters), and a string of single character codes representing each remaining cohort property in a fixed order (e.g., W = wild, I = immature, U = unmarked, U = untagged, etc.)]

----- NMRT sentence for natural mortality -----

Field   Data         Sample
1       Sentence ID  NMRT
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       Mortality    25.283949 (a double)

----- FMRT sentence for fishing mortality -----

Field   Data         Sample
1       Sentence ID  FMRT
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       Fishery      2 (or fishery abbreviation)
6       CohortID     URB1986WIUUU
7       LegalCat     345.192837 (a double)
8       ShakerMort   43.1288393 (a double)
9       CNRMort      21.1093994 (a double)

----- CMIG sentence for cohort migration -----

Field   Data         Sample
1       Sentence ID  CMIG
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       FromRegion   4 (or region abbreviation)
5       ToRegion     6 (or region abbreviation)
6       NumMigrants  25.49304958 (a double)

If space is a problem in the output file, we could create other sentences that aggregate data. For example:

----- SABN sentence for stock abundance in a region -----

Field   Data         Sample
1       Sentence ID  SABN
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       StockID      URB
6       StartAbn     378947.1438 (a double)

------ CLHD sentence for cohort life history data -----

Field   Data         Sample
1       Sentence ID  CLHD
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       StartAbn     28394.1928 (a double)
7       NatMort      28.3941928 (a double)
8       TotCat       3564.19283 (a double)
9       TotShakers   32.9238287 (a double)
10      TotCNR       27.3984749 (a double)
11      TotOutMig    283.193993 (a double)
12      TotInMig     11.9384773 (a double)
13      EndAbn       ????.????? (a double)

[Note: We could put all data (i.e., morts for each fishery) in a single record if we included a field for the number of fisheries harvesting the cohort in the given TS and Region. That would allow this type of sentence to have a variable number of data fields.]


Generating The Data File.

At the end of each process (e.g., MortalityProcess, MigrationProcess) during a timestep, an OutputGenerator object will have the opportunity to stream data to a data file. The type and frequency of data streaming to the data file is configured by the user at startup. To save space in the data file, null data (e.g., zero catches, zero natural morts, zero migrations) will not be streamed.


Using The Data File.

The data reading program will store data in tables, matrices, or some other structures. These structures will be created on startup, and all entries will be set to zero. Since only non-zero data are streamed to the data file, the reading program will just fill in the non-zero entries.

The advantage of using standardized data sentences is that users of the data file need only know the sentence formats in order to use the data. Any type of computer and language can be used to create a program to read each sentence and then determine what to do with it.

------------ End Memo -----------

The code structure to export data from the Coast model using this type of data standard is in place. Troy has developed a DataRequestManager that can store data at the end of each major process in each timestep. A sample of data output is below:

CABN, 1994, 1, 1, WCN1992XIXXXX, 249199
CABN, 1994, 1, 1, WCN1993XIXXXX, 432492
CABN, 1994, 1, 1, LYF1989XIXXXX, 0.92099
CABN, 1994, 1, 1, LYF1990XIXXXX, 3819.65
CABN, 1994, 1, 1, LYF1991XIXXXX, 1297
CABN, 1994, 1, 1, LYF1992XIXXXX, 6913.63
CABN, 1994, 1, 1, LYF1993XIXXXX, 16278.3
CABN, 1994, 1, 1, MCB1989XIXXXX, 14337.9
CABN, 1994, 1, 1, MCB1990XIXXXX, 43179.7
CABN, 1994, 1, 1, MCB1991XIXXXX, 278504
CABN, 1994, 1, 1, MCB1992XIXXXX, 396091
CABN, 1994, 1, 1, MCB1993XIXXXX, 764184
FMRT, 1994, 1, 1, 1, AKS1989XIXXXX, 208.646, 0.0195405, 23.8002
FMRT, 1994, 1, 1, 1, AKS1990XIXXXX, 760.973, 0.287332, 87.045
FMRT, 1994, 1, 1, 1, AKS1991XIXXXX, 495.609, 7.4111, 64.7623
FMRT, 1994, 1, 1, 1, NTH1989XIXXXX, 3639.1, 5.43337, 420.799
FMRT, 1994, 1, 1, 1, NTH1990XIXXXX, 5415.74, 89.9963, 717.756
FMRT, 1994, 1, 1, 1, NTH1991XIXXXX, 8930.04, 3259.91, 4660.01
FMRT, 1994, 1, 1, 1, NTH1992XIXXXX, 3408.08, 5434.2, 6460.03
FMRT, 1994, 1, 1, 1, FRE1990XIXXXX, 2249.31, 19.2238, 277.821
FMRT, 1994, 1, 1, 1, FRE1991XIXXXX, 13049.8, 2219.4, 3966.96
FMRT, 1994, 1, 1, 1, FRE1992XIXXXX, 567.835, 2936.55, 3345.72
FMRT, 1994, 1, 1, 1, FRL1990XIXXXX, 420.461, 4.30132, 52.7235
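Reading this stream requires nothing more than splitting each line on commas and dispatching on the sentence ID. A hypothetical C++ sketch for the CABN sentence (field order follows the table in the memo; the helper names are invented):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a comma-separated sentence into trimmed fields.
std::vector<std::string> SplitSentence(const std::string& line) {
    std::vector<std::string> fields;
    std::istringstream in(line);
    std::string field;
    while (std::getline(in, field, ',')) {
        std::string::size_type first = field.find_first_not_of(" \t");
        std::string::size_type last  = field.find_last_not_of(" \t");
        fields.push_back(first == std::string::npos
                             ? std::string()
                             : field.substr(first, last - first + 1));
    }
    return fields;
}

// Parsed CABN sentence: Year, TimeStep, Region, CohortID, StartAbn.
struct CohortAbundance {
    int year, timeStep, region;
    std::string cohortID;
    double startAbn;
};

bool ParseCABN(const std::string& line, CohortAbundance* out) {
    std::vector<std::string> f = SplitSentence(line);
    if (f.size() != 6 || f[0] != "CABN") return false;  // wrong ID: discard
    out->year     = std::stoi(f[1]);
    out->timeStep = std::stoi(f[2]);
    out->region   = std::stoi(f[3]);
    out->cohortID = f[4];
    out->startAbn = std::stod(f[5]);
    return true;
}
```

As the memo notes, a reading program would check the ID field first and simply skip sentence types it does not use.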

Jim reported that he wrote a Visual Basic utility to read through this output file and create *cat.prn and *esc.prn files, each of which was then validated against those produced by the PSC Chinook Model. The size of the output file was about 2.5 MB, and the utility required about 25 sec to generate the formatted output files (using a Pentium 133 machine).

There was some discussion about this idea. In general, the meeting group felt this was a good idea, but that it needed to be fine-tuned to meet the needs of each type of model configuration.

Jim emphasized the need to develop standardized stock definitions. He said that some stocks in the FRAM Model have the same names as stocks in the PSC Chinook Model, but use different run aggregations (e.g., separating Hood Canal runs from South Puget Sound runs) and different CWT data for estimating harvest-related parameters. Marianne suggested passing the stock standardization idea along to the PSMFC.


7. Report on PSC CTC project with ESSA and UW.

The PSC has contracted with ESSA of Vancouver, BC to make improvements in the PSC Chinook Model. UW is a subcontractor to ESSA. Currently there are three closely related models:

-- PSC Chinook Model (CTC versions 95, 98, and 99);
-- Crisp Harvest (C++ version of CTC 95);
-- Coast Model (New C++ version of CTC 95).

The PSC Chinook Model is written in QuickBasic and is the model currently used by the Chinook Technical Committee (CTC). Crisp Harvest is a C++ version of the 1995 PSC Chinook Model, and includes an extensive Graphical User Interface (GUI). The "Coast Model" is the code framework developed by this NMFS project during the past 2+ years, and was derived from Crisp Harvest. The Coast Model does not yet have a GUI.

The work plan for this project calls for UW to update the Coast Model to incorporate all the current algorithms used by the PSC Chinook Model (i.e., age-specific FPs, Spring chinook aging algorithm, stock-specific incidental mortality rates). The new "abundance-based management algorithm" will not be included in this development. ESSA will develop a calibration algorithm, database utilities, and a limited GUI. The basic idea is to develop a CTC version of the Coast Model in parallel with the PSC Chinook Model until all validation checks have been completed and real-time management can be transferred to the Coast Model. The work plan is illustrated below:


  CTC 95  ---------- CTC 98  ----------- CTC 99
   |                   |                   |
   |                   |                   |
   |                   |                   |
   |                   |                   |
Crisp Harvest      Validate            Validate
   |                   |                   |
   |                   |                   |
   |                   |                   |
   v                   v                   v
Coast 98/95 ------ Coast 98/98 ------- Coast 99


        UW:   Process algorithms
        ESSA: Calibration algorithm
              Database utilities
              Limited GUI


8. Next meeting.

The next model committee meeting was tentatively scheduled for Tuesday March 16, 1999 (location to be determined). Please advise if this date is not acceptable.

28. Proposed Data Communication Standard by Jim Norris, 12/03/98

Draft

Proposed Salmon Model Data Communication Standard

by

Jim Norris


Background.

Many computer programs and models are used by Pacific salmon researchers and managers. There is a need to communicate information between programs and from individual programs to users. Currently (Nov 1998), there are three general data communication methods:

(1) raw data from one program is used by other programs for further analysis (e.g., FRAM to TAMM communication);

(2) raw data from one program is used by auxiliary programs to create formatted reports (e.g., PSC Chinook Model raw data dump); or

(3) each program creates model specific formatted reports (e.g., PSC Chinook Model and FRAM reports).

Users of the PSC Chinook Model report that the raw data dump technique has been an effective method of creating reports and analyzing output data. My thought is that data communication within and between programs may be improved by creating a Data Communication Standard. Communicating salmon model data between programs and users is similar to communicating navigation data between different types of navigation devices produced by different manufacturers. The National Marine Electronics Associated (NMEA) adopted a voluntary data interfacing standard (called NMEA 0183) to solve this problem. The NMEA 0183 standard insures that a vessel owner can purchase different brands and types of navigation devices (e.g., Furuno Loran C, Northstar GPS, Simrad Depth Sounder, Wood Freeman Autopilot) and be confident that these devices can communicate with each other.

I propose a similar type of data communication standard for salmon models. The sections below outline the idea. Check it out and let me know if you have any suggestions or comments. We plan to try a test implementation of this system during the next month.


The NMEA 0183 Standard.

The basic idea behind the NMEA 0183 standard is to communicate data via standardized "sentences." A data sentence is composed of data fields, the first of which is a sentence identifier. The number and type of data fields is standardized for each type of sentence. For example, all "GPGLL" sentences have four fields (latitude, North or South, longitude, East or West) in addition to the sentence ID and look something like this:

GPGLL,4806.78,N,12245.34,W

Data sentences are streamed from one device to another at periodic intervals. Each device can stream different sentences at different intervals. Receiving devices read each sentence, look at the ID, and then either discard or use the sentence.


Proposed Salmon Model Data Standard.

The basic idea is to stream standardized data sentences from a model to a text file. Then anyone can write a program using any type of software and hardware to read the data stream, extract the data of interest, and use that data as desired (e.g., further analysis or formatted report). The advantage of using standardized data sentences is that users can export any data in any order and at any interval. The disadvantage of this system is that there may be considerable duplication of information in each sentence, thus making for a large output file.


Sample Sentences.

The following sample sentences save data at the lowest possible level.

----- CABN sentence for cohort abundance -----

Field   Data         Sample
1       Sentence ID  CABN
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       StartAbn     378947.1438 (a double)

[Note: Each cohort will have a unique identifier code composed of the parent stock abbreviation, the brood year (using all four characters), and a string of single character codes representing each remaining cohort property in a fixed order (e.g., W = wild, I = immature, U = unmarked, U = untagged, etc.)]
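The cohort ID scheme in the note can be sketched as a small parser. A minimal sketch in Python, assuming the property codes are a fixed-order run of capital letters and the brood year is the first four-digit run after the stock abbreviation (the function name is illustrative, not from the model code):

```python
import re

# Assumed layout: stock abbreviation + 4-digit brood year + property codes.
COHORT_ID = re.compile(r"^(?P<stock>[A-Z]+?)(?P<brood>\d{4})(?P<props>[A-Z]+)$")

def parse_cohort_id(cid):
    """Split a cohort ID such as 'URB1986WIUUU' into its three parts."""
    m = COHORT_ID.match(cid)
    if m is None:
        raise ValueError("malformed cohort ID: " + cid)
    return m.group("stock"), int(m.group("brood")), m.group("props")

stock, brood, props = parse_cohort_id("URB1986WIUUU")
# stock == "URB", brood == 1986, props == "WIUUU" (W = wild, I = immature, ...)
```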

----- NMRT sentence for natural mortality -----

Field   Data         Sample
1       Sentence ID  NMRT
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       Mortality    25.283949 (a double)

----- FMRT sentence for fishing mortality -----

Field   Data         Sample
1       Sentence ID  FMRT
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       Fishery      2 (or fishery abbreviation)
6       CohortID     URB1986WIUUU
7       LegalCat     345.192837 (a double)
8       ShakerMort   43.1288393 (a double)
9       CNRMort      21.1093994 (a double)

----- CMIG sentence for cohort migration -----

Field   Data         Sample
1       Sentence ID  CMIG
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       FromRegion   4 (or region abbreviation)
5       ToRegion     6 (or region abbreviation)
6       NumMigrants  25.49304958 (a double)

If space is a problem in the output file, we could create other sentences that aggregate data. For example:

----- SABN sentence for stock abundance in a region -----

Field   Data         Sample
1       Sentence ID  SABN
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       StockID      URB
6       StartAbn     378947.1438 (a double)

------ CLHD sentence for cohort life history data -----

Field   Data         Sample
1       Sentence ID  CLHD
2       Year         1985
3       TimeStep     3 (or TimeStep abbreviation)
4       Region       4 (or region abbreviation)
5       CohortID     URB1986WIUUU
6       StartAbn     28394.1928 (a double)
7       NatMort      28.3941928 (a double)
8       TotCat       3564.19283 (a double)
9       TotShakers   32.9238287 (a double)
10      TotCNR       27.3984749 (a double)
11      TotOutMig    283.193993 (a double)
12      TotInMig     11.9384773 (a double)
13      EndAbn       ????.????? (a double)

[Note: We could put all data (i.e., morts for each fishery) in a single record if we included a field for the number of fisheries harvesting the cohort in the given TS and Region. That would allow this type of sentence to have a variable number of data fields.]


Generating The Data File.

At the end of each process (e.g., MortalityProcess, MigrationProcess) during a timestep, an OutputGenerator object will have the opportunity to stream data to a data file. The type and frequency of data streaming is configured by the user at startup. To save space in the data file, null data (e.g., zero catches, zero natural morts, zero migrations) will not be streamed.
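As a rough illustration of this design (the `OutputGenerator.write` interface and the enabled-sentence list are assumptions, not the prototype's actual code):

```python
import io

class OutputGenerator:
    """Sketch: streams comma-delimited sentences, suppressing null data."""

    def __init__(self, stream, enabled=("CABN", "NMRT", "FMRT", "CMIG")):
        self.stream = stream
        self.enabled = set(enabled)   # sentence types configured at startup

    def write(self, sentence_id, *fields):
        if sentence_id not in self.enabled:
            return
        # Suppress null data (zero catches, morts, migrations) to save space.
        data = [f for f in fields if isinstance(f, float)]
        if data and all(f == 0.0 for f in data):
            return
        self.stream.write(",".join([sentence_id] + [str(f) for f in fields]) + "\n")

out = io.StringIO()
gen = OutputGenerator(out)
gen.write("CABN", 1985, 3, 4, "URB1986WIUUU", 378947.1438)
gen.write("NMRT", 1985, 3, 4, "URB1986WIUUU", 0.0)   # null -> not streamed
```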


Using The Data File.

The data reading program will store data in tables, matrices, or some other structures. These structures will be created on startup, and all entries will be set to zero. Since only non-zero data are streamed to the data file, the reading program will just fill in the non-zero entries.

The advantage of using standardized data sentences is that users of the data file need only know the sentence formats in order to use the data. Any type of computer and language can be used to create a program to read each sentence and then determine what to do with it.
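A reading program along these lines might look like the following sketch, assuming the CABN and NMRT layouts shown above (the table keys and function name are illustrative):

```python
from collections import defaultdict

def read_sentences(lines):
    """Fill zero-initialized tables from a stream of data sentences."""
    abundance = defaultdict(float)   # (year, timestep, region, cohort) -> abn
    nat_mort = defaultdict(float)
    for line in lines:
        fields = line.strip().split(",")
        sid = fields[0]
        if sid == "CABN":
            year, ts, region, cohort, abn = fields[1:6]
            abundance[(int(year), ts, region, cohort)] = float(abn)
        elif sid == "NMRT":
            year, ts, region, cohort, mort = fields[1:6]
            nat_mort[(int(year), ts, region, cohort)] = float(mort)
        # Unknown sentence IDs are simply discarded, as in NMEA 0183.
    return abundance, nat_mort

abn, _ = read_sentences(["CABN,1985,3,4,URB1986WIUUU,378947.1438"])
```

Since only non-zero data are streamed, entries the file never mentions stay at the zero default.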

28.1. Standard formats for data interchange by Tom Wainwright, 12/11/98

Jim/Troy,
What you've proposed looks fairly complete and workable.  My only suggestion is that you look at some of the interagency standard formats for web data exchange (particularly netCDF--
    http://www.unidata.ucar.edu/packages/netcdf/
which is widely used for oceanographic and environmental data). The advantage of following these standards is that free software and code libraries are often available for manipulating the data outside the application that generates it, making it easier for others to use the data.

--Tom

27. Oct. 6, 1998 Meeting Minutes by Jim Norris, 10/21/98

To: NMFS Salmon Model Committee and interested parties.

FROM: Jim Norris

Subject:  Minutes of the October 6, 1998 meeting


Contents:

1. Attendance list.
2. Algorithm problems associated with flexible timesteps and regions  (Jim Norris, Troy Frever).
3. Prototype model code update (Troy Frever).
4. State Space Model update (Ken Newman, Alan Hicks).
5. Model comparisons update (Jim Norris).
6. September Pacific Fishery Management Council meeting report (Robert Kope).
7. Ecosystem/Fisheries Symposium report (Jim Norris, Ken Newman, Norma Jean Sands).
8. Next meeting.


1. Attendance List.

Norma Jean Sands (ADFG)
Jim Anderson (UW)
Troy Frever (UW)
Jim Norris (UW)
Robert Kope (NMFS NWFSC)
Martin Liermann (NMFS NWFSC)
Bob Conrad (NWIFC)
Ken Newman (UI)
Allan Hicks (UI)
Marianne McClure (CRITFC)
Gary Morishima (QMC)
Steve Lindley (NMFS SWFSC)


2. Algorithm problems associated with flexible timesteps and regions  (Jim Norris, Troy Frever).

Jim and Troy reported that replacing the space/time concepts of "Preterminal" and "Terminal" in the PSC Chinook Model with flexible timesteps and regions requires changes to the following three algorithms, in addition to the many other modifications discussed at previous meetings.

Catch Ceiling Algorithm. Currently, the input catch ceilings for each ceilinged fishery (in the *.cei file) are given for each year (i.e., over all timesteps and regions). These input values are then used to compute scalars that relate catches prior to the 1985 treaty to catches after 1985:

                     input ceiling(y,f)
  scalar(y,f) = -----------------------------------
                SUM(input ceilings(y = 79 to 84,f))

During forward simulation, the program sums the Preterminal and Terminal model predicted catches for each fishery during the years 1979-1984 (called the "base period"). The Preterminal and Terminal catch ceilings for years beyond 1984 are computed by multiplying the scalar(y,f) by the appropriate sum for the 1979-1984 period. That is,

  ceiling(y,f,phase) = scalar(y,f)*SUM(model catch(y = 79 to 84,f,r))

where phase is either Preterminal or Terminal.

The problem with the above algorithm is that Preterminal and Terminal do not specifically refer to time or space. We need an algorithm that will compute catch ceilings by timestep and region for each fishery.

To generalize this process to multiple timesteps and regions, the scalar(y,f) values can be computed as before. In fact, the scalars can be computed outside the simulation program and then input directly. During forward simulation, the program can sum the model predicted catches for each fishery during the years 1979-1984 by timestep and region, instead of by Preterminal or Terminal phase. The catch ceilings for years beyond 1984 can then be computed by multiplying the scalar(y,f) by the appropriate sum for the 1979-1984 period. That is,

  ceiling(y,f,t,r) = scalar(y,f)*SUM(model catch(y = 79 to 84,f,t,r))

where t = timestep and r = region.
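A numerical sketch of the scalar and generalized ceiling computation above (the dictionary layout, fishery name, and numbers are illustrative):

```python
def ceiling_scalar(input_ceilings, year, fishery):
    """scalar(y,f) = input ceiling(y,f) / SUM(input ceilings 1979-84, f)."""
    base = sum(input_ceilings[(y, fishery)] for y in range(1979, 1985))
    return input_ceilings[(year, fishery)] / base

def ceiling(scalar, base_period_catch):
    """base_period_catch = SUM over 1979-84 of model catch for (f, t, r)."""
    return scalar * base_period_catch

# Illustrative inputs: flat 100-unit ceilings in the base period, 150 in 1990.
ceilings = {(y, "troll"): 100.0 for y in range(1979, 1985)}
ceilings[(1990, "troll")] = 150.0
s = ceiling_scalar(ceilings, 1990, "troll")   # 150 / 600 = 0.25
cap = ceiling(s, 400.0)                       # applied to a (t, r) base sum
```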


Adult Equivalent Factors In Production Functions. The production functions in the PSC Chinook Model relate spawners to ADULT recruitment. Adult recruitment is converted back to "Age1Fish" by a factor called "RectAtAge1". In essence, the RectAtAge1 factor defines the relationship between adult recruitment and Age1Fish in the equilibrium condition with no harvesting. Thus, the RectAtAge1 factor is defined by the maturation and natural mortality schedules for each stock. For the new model, three situations need to be considered.

(1) If the maturation and natural mortality schedules do not change over time and space, each stock has a unique RectAtAge1 factor. About 2/3 of the stocks in the PSC Chinook Model fall in this category. Note that the RectAtAge1 factors could be computed before a model run and passed in as input data along with other production function parameters.

(2) If the schedules do change over time and space, then each stock will have a different RectAtAge1 factor for each year. About 1/3 of the stocks in the PSC Chinook model fall in this category. These stocks have constant natural mortality schedules, but variable maturation schedules. At the start of each year, a new RectAtAge1 factor is computed for each stock. As with case (1), since all schedules are included with the input data, the RectAtAge1 factors could be computed before a model run and passed in as input data along with other production function parameters.

(3) Cases (1) and (2) can be generalized to variable timesteps and regions, provided the timestep and region definitions and the maturation and natural mortality schedules for each stock are all defined during the model configuration. We envision the new model will have timesteps and regions defined during configuration, but we would like to allow for the possibility that maturation and survival schedules may be determined dynamically based on model predicted environmental conditions.

Jim and Troy proposed that for forward simulation runs RectAtAge1 factors be considered part of the production parameters for each stock and become part of the input data. This should be the case even if natural mortality and maturation schedules are determined dynamically. In other words, the model user must have some previously estimated relationship between adult recruits and Age1Fish for each stock and year. If a model dynamically computes natural mortality and maturation schedules, the model predicted RectAtAge1 factors could be stored and reported as output.
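One plausible reading of the RectAtAge1 computation, sketched under the assumption that a single Age-1 fish is followed through the stock's schedules with no harvesting; the actual CTC definition may differ in detail, so treat this as a sketch rather than the model's code:

```python
def rect_at_age1(nat_mort, mat_rate):
    """Follow 1.0 Age-1 fish through natural mortality and maturation.

    nat_mort[a], mat_rate[a] are schedules for ages 1..N (index 0 = age 1).
    Returns the factor such that Age1Fish = AdultRecruits * RectAtAge1.
    """
    alive, adults = 1.0, 0.0
    for m, r in zip(nat_mort, mat_rate):
        alive *= (1.0 - m)       # survive natural mortality this age
        adults += alive * r      # maturing fish join adult recruitment
        alive *= (1.0 - r)       # the remainder stay immature
    return 1.0 / adults

# Illustrative schedules (not from any actual stock).
factor = rect_at_age1([0.5, 0.4, 0.3, 0.2, 0.1], [0.0, 0.1, 0.3, 0.7, 1.0])
```

With constant schedules (case 1) this is computed once per stock; with time-varying maturation (case 2) it is recomputed at the start of each year.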


Pre-Spawning Mortality And Production Functions. The PSC Chinook Model uses a truncated Ricker function for natural stocks. The Ricker A parameter and an optimum escapement level are the input parameters. The Ricker B parameter is determined from the Ricker A parameter using Hilborn's Approximation. If there is no additional mortality after all harvesting (this is true for all but 3 stocks), the maximum escapement level at which to truncate the function is given by RickerB/RickerA.

For three Columbia River stocks, the escapement from the fisheries is adjusted by a pre-spawning survival factor (called the IDL, or "inter-dam loss", factor). For these stocks, the maximum escapement level is increased to account for pre-spawning mortality. Here's the code.

FUNCTION FRicker (ESC, Stk%, Yr%)
  A = RickerA(Stk%)
  B = Optimum(Stk%) / (.5 - .07 * A)
  FRicker = ESC * EXP(A * (1 - ESC / B))
  '..... If escapement exceeds level producing maximum recruitment,
  '..... keep recruits at maximum.  cf Ricker 1975, p. 347, eq. 10
  MaxEsc! = B / (A * IDL!(Stk%, Yr%))
  IF ESC > MaxEsc! THEN
    FRicker = B * EXP(A - 1) / A
  END IF
END Function

There was a lot of discussion about this algorithm. The main point of contention was why the maximum escapement value should be adjusted if the escapement value ("ESC") being passed into the function had already been adjusted for pre-spawning mortality. My understanding (readers please correct me if I'm wrong!) is the following.

The Ricker function parameters are estimated from observed spawners and observed recruitment to the fisheries. Thus, the parameters do not incorporate any non-fishing mortality after fishing but before spawning. Since the parameters do not incorporate pre-spawning survival, the computed maximum escapement doesn't incorporate it either. That is, the maximum escapement parameter assumes there is no pre-spawning mortality. Thus, adjusting the maximum escapement level by pre-spawning mortality is to correct the maximum escapement parameter value, not to correct the escapement passed into the function.

In the new model we treat the pre-spawning mortality as a form of non-fishing, or natural, mortality that may occur during any timestep and in any region, depending on how the user configures the natural mortality process. Thus, the new model may be configured to have pre-spawning mortality during several timesteps and regions, depending on the model resolution. For example, separate "natural" mortalities could be assessed as fish pass each dam.

To extend the concept of pre-spawning mortality to variable timesteps and regions, we need a parameter that represents natural mortality of the escapement (all age classes) from the fisheries to the spawning grounds. Currently, the stock/year specific IDL parameters serve this purpose. An analogous parameter for the generalized case is the weighted average of the IDLs for each age class in the escapement, where the weights are the relative contributions of each age class to the total escapement. The current algorithm is a special case of the weighted average algorithm. Troy reported that he has already implemented this general structure and it validates with the old code.
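The weighted-average IDL described above can be sketched as follows (the age-class numbers are made up; when all ages share one IDL the result reduces to the current stock/year factor, which is how it validates with the old code):

```python
def weighted_idl(escapement_by_age, idl_by_age):
    """Escapement-weighted average of age-specific pre-spawning survival."""
    total = sum(escapement_by_age)
    return sum(e * i for e, i in zip(escapement_by_age, idl_by_age)) / total

# Illustrative: three age classes with different IDL survival factors.
avg = weighted_idl([100.0, 300.0, 600.0], [0.9, 0.8, 0.7])
# (90 + 240 + 420) / 1000 = 0.75
```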


3. Prototype model code update (Troy Frever).

Troy reported that the full prototype is not quite ready yet, but he is very close. At this point, the prototype code has the following new features currently implemented:

 - variable timesteps
 - variable regions
 - variable age classes
 - variable cohort characteristics (marked, tagged, etc.)
 - variable dimensionality for most data parameters
 - transition matrix migration

Work to be completed prior to prototype delivery:

 - cohort configuration options from input data
 - harvest management (policy) framework
 - simple catch ceilings
 - completion of new data input file capabilities
 - expansion of output options (possibly data dump - see below)
 - internal code cleanup

Remaining work after prototype delivery (next contract period):
 - addition of other harvest processes (selective fishery, etc.)
 - different shaker process?
 - addition of other management processes (escapement goals, etc.)
 - speed optimization
 - documentation

Gary Morishima suggested that we have one large output file (in text format) from which other stand alone programs (e.g., Excel, Access) can create formatted reports. He said this has worked well for the PSC Chinook Model.


4. State Space Model update (Ken Newman, Alan Hicks).

Ken described a new migration module he has implemented in the State Space Model. A complete description of the new module can be obtained at:

/harvest/newman/migmod3.pdf

Briefly, the model assumes that the probability distribution for the location at time t+1 of a fish at location p(t) at time t is a Beta distribution with parameters alpha(p(t),t) and beta(t). The alpha parameter is a function of time and location while the beta parameter depends only on time.

The expected value of the next location, mu, is a product of the current location and a multiplier <= 1.0 (a logit function), that early in the time period (i.e., t near 0) is made arbitrarily near to 1.0 by choosing an appropriate parameter in the logit function (see the reference doc for details). As time increases, the multiplier shrinks towards zero, which is the location on the line of the natal area. Movement beyond the natal area, whether north or south of the natal area, is not allowed, assuming the freshwater influence will attract the fish back to spawn. Nor is movement outside the line segment (to the far north or south) allowed--thus the system is closed.
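A heavily hedged sketch of the expected-location idea (the logistic steepness and midpoint parameters below are illustrative, not Ken's fitted values; see migmod3.pdf for the actual formulation). Locations are taken on (0, 1) with the natal area at 0, and the Beta alpha parameter is recovered from the mean:

```python
import math

def expected_next_location(p, t, k=0.1, t_mid=100.0):
    """mu = current location times a logistic multiplier <= 1.0.

    Near t = 0 the multiplier is close to 1.0 (little movement); as t grows
    it shrinks toward zero, pulling fish toward the natal area at 0.
    """
    multiplier = 1.0 / (1.0 + math.exp(k * (t - t_mid)))
    return p * multiplier

def beta_params(mu, beta):
    """Given the mean mu and beta(t), solve mean = alpha/(alpha+beta)."""
    return mu * beta / (1.0 - mu), beta

mu = expected_next_location(0.8, t=5)      # early season: barely moves
alpha, b = beta_params(mu, beta=3.0)       # alpha(p(t), t) for the Beta draw
```

Because the Beta distribution has support only on (0, 1), movement beyond the natal area or outside the line segment is impossible, which is the "closed system" property noted above.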

Ken reported the following results from the new module (relative to the old module) for the six years of Humptulips data he has been working with:

-- No change in the initial survival estimates;
-- The initial distributions are now very stable (i.e., consistent across the six years). The previous module had a very odd initial distribution for the 1988 data;
-- Fishing effort parameters still don't show a consistent pattern between US and Canada;
-- The overall goodness of fit improved by up to 40%.

Robert Kope asked if Ken had tried fitting the model using a common catchability coefficient for the US and Canada harvest processes. He had not.

Jim Anderson suggested that Ken might try a more traditional migration model and gave three examples that Rich Zabel (UW Columbia Basin Research) has been working with.

Alan Hicks (UI graduate student) presented some preliminary results showing correlations between upwelling index (lagged to ocean entry year) and survival to the first week of the SSM.


5. Model comparisons update (Jim Norris).

Jim reported that he added a second FRAM model configuration to the model comparisons program to (1) demonstrate the flexibility of the underlying code structure and (2) validate two different migration process algorithms. After a Base Model is run to generate synthetic catch and escapement data, the FRAMAnalyst object estimates the FRAM parameters (harvest rates and maturation rates). The FRAMAnalyst then creates two independent models from the same parameter estimates.

The first model (FRAM1) configures a model in which each cohort has a number of immature schools with equal initial abundance, distributed uniformly over a narrow range in the Preterminal region. Throughout the forward simulation, the immature cohorts have no migration (i.e., their schools never change location). However, at appropriate times during the simulation, each immature cohort goes through a maturation process that creates a new mature cohort with the same number of schools as the associated immature cohort. The maturation rate determines the fraction of fish from each immature school that is transferred to its associated mature school. The migration process for the mature cohort moves its schools through the Terminal region and, after suffering fishing mortality, on to the Spawning region.

The second model (FRAM2) configures a model in which each cohort has only three schools--one for each region (Preterminal, Terminal, and Spawning). Thus, the schools can be thought of as the regional abundances of a cohort (i.e., the abundance vector in the State Space Model). The total initial abundance of the cohort is given to the school located in the Preterminal region; the initial abundances of the schools located in the Terminal and Spawning regions are zero. The migration process for this model uses the transition matrix approach used in the State Space Model. That is, fish from a given cohort are transferred among its schools based on the transition matrix for that cohort during that timestep. The elements of the transition matrix are determined by the maturation rates.
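The FRAM2 migration step described above amounts to multiplying the regional abundance vector by a transition matrix; a minimal sketch, with an illustrative region order and matrix (the 20%/50% rates are made up, not estimated FRAM parameters):

```python
def migrate(abundance, transition):
    """abundance[i] = fish in region i; transition[i][j] = fraction i -> j."""
    n = len(abundance)
    return [sum(abundance[i] * transition[i][j] for i in range(n))
            for j in range(n)]

# Regions: [Preterminal, Terminal, Spawning]. Each row sums to 1.0, so the
# total cohort abundance is conserved by migration.
T = [[0.8, 0.2, 0.0],
     [0.0, 0.5, 0.5],
     [0.0, 0.0, 1.0]]
new_abn = migrate([1000.0, 0.0, 0.0], T)
```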

Thus, FRAM1 and FRAM2 utilize the same estimated parameters to configure models with radically different configurations. Theoretically, both models should produce the same results if given the same FRAM parameter estimates.

Jim showed preliminary results of how well each model fit the synthetic data. A Base Model was run in deterministic mode using the "Constant" migration pattern and "Base" harvest plans described at the last meeting. The resulting synthetic catch and escapement data were aggregated over 30 day time periods, FRAM parameters were estimated, and FRAM1 and FRAM2 models were configured. The Base and FRAM models were then run using all combinations of two migration patterns ("Constant" and "Increasing") and three harvest plans ("Base", "Low Ocean", and "Selective"). For the "Low Ocean" harvest plan, the models reduced the harvest rates for the Ocean Troll fishery to 15% of the "Base" plan. For the "Selective" harvest plan, the models reduced the harvest rates for the Unmarked Cohort in the Ocean Troll and all sport fisheries to 15% of the "Base" plan.

For each of the six comparisons, the square root of the sum of squared differences between the Base model catches for each fishery and the corresponding predicted catches from each FRAM model was computed (labeled RMSE in the table below). Jim distributed printed results from each run; the RMSE results are below:

Config  Migration    Harvest Plan   RMSE FRAM1    RMSE FRAM2

  1     Constant     Base               19            14
  2     Constant     Low Ocean          58            58
  3     Constant     Selective          70            69
  4     Increasing   Base              192           195
  5     Increasing   Low Ocean         234           233
  6     Increasing   Selective         203           204

The results from the two FRAM models were nearly identical; thus, the code algorithms were assumed to be working properly. Also, both FRAM models accurately predicted the configuration from which the FRAM parameters were estimated (Config 1).

In Config 2, most of the FRAM error was caused by underestimating the South Sound Net fishery catches. Apparently the FRAM models did not accurately simulate the movement of increased abundance (due to low ocean harvest) into Puget Sound.

For Config 3, the FRAM models predicted the Marked Cohort catches very accurately, but underestimated the Unmarked Cohort catches in the South Sound Net fishery.

Configs 4 - 6 simulated a change in migration pattern from the base year to the predicted year. For the "Increasing" migration pattern the fish start moving about a month earlier (although at a very slow rate) and the rate increases as time progresses. This change in migration pattern increased the FRAM errors. For each of these configurations the main source of error was in overestimating the South Sound Net catches.

Jim reported that he had most of the code running to include the SSM in the analysis, but the code had an obvious bug that he was unable to correct before the meeting.


6. September Pacific Fishery Management Council meeting report (Robert Kope).

Robert reported that the September PFMC meeting did not have any significant topics for this project.


7. Ecosystem/Fisheries Symposium report (Jim Norris, Ken Newman, Norma Jean Sands).

Jim reported that the plenary speakers emphasized how difficult it is (and will be) to bring together all types of ecological information (not just computer models) into the decision-making process. For example: How do you include knowledge of indigenous people with computer models?

There were several papers using the ECOPATH model. This seems to be a popular model for complete ecosystems. The results (predictions) are still pretty general (e.g., removing species A will increase some species and decrease others). There also was a presentation using the Facet Decision System.

Several papers noted the importance of salmon carcasses in bringing marine elements into the terrestrial system. Main conclusion--rotting fish are more valuable than you might think.

Milo Adkison did a pretty detailed analysis of 32 possible models to predict Bristol Bay salmon runs. The environmental factors he considered were of relatively little value compared to "sibling" information (i.e., number of age 2, 3, and 4 fish in previous years). His "best" model fit all of the past 40 years well ... even the big production jump during the regime shift in the late 70s. However, the predictor completely missed the last 3 years ... over-estimated by about 2 times. Main conclusion--even 40 years of good data and good model fits aren't enough to predict some changes. As Stuart Pimm noted in his plenary remarks: "Things go bump in the night!"

There was a special evening session to demonstrate models, including the Crisp Harvest Model and NerkaSim. During the discussion period some members of the audience wanted to know why we didn't have an optimization routine in Crisp Harvest to search for one or more harvest plans that would meet all the management constraints (e.g., escapement goals, allocation goals, etc.). They were pretty shocked to learn that the stakeholders preferred to do such a search in the political arena where they could hope to get a better deal.

Ken Newman reported that he found Randall Peterman's paper on sources of uncertainty in marine fish management very interesting.


8. Next meeting.

No date was selected for the next meeting due to uncertainty about a PSC Chinook Technical Committee workshop tentatively scheduled for November 16-20. I recommend that we schedule the next model committee meeting for Friday December 11 at NMFS Montlake. Please advise ASAP if this date is not acceptable.

26. Aug. 27, 1998 Meeting Minutes by Jim Norris, 10/21/98

To: NMFS Salmon Model Committee and interested parties.

FROM: Jim Norris

Subject:  Minutes of the August 27, 1998 meeting


Contents:

1. Attendance list.
2. Brief review of July 30, 1998 meeting (Jim Norris).
3. Update on code development (Troy Frever).
4. Update on effort data collection (Jim Scott).
5. Update on State Space Model (SSM) work (Ken Newman via email).
6. Update on model comparisons program (Jim Norris).
7. Next meeting.


1. Attendance List.

Norma Jean Sands (ADFG)
Cara Campbell (NMFS NWFSC)
Troy Frever (UW)
Jim Norris (UW)
Jim Scott (NMFS NWFSC)
Robert Kope (NMFS NWFSC)
Martin Liermann (NMFS NWFSC)
Bob Conrad (NWIFC)
Marianna Alexandersdottir (NWIFC)
Din Chen (CDFO)
Tom Wainwright (NMFS NWFSC)


2. Brief review of July 30, 1998 meeting (Jim Norris).

Jim noted that no minutes were prepared immediately following the July 30, 1998 meeting to allow him more time to work on model comparisons. Complete minutes for that meeting will be prepared during September. To bring current attendees up-to-date, Jim read a July 31, 1998 email message he prepared for Troy Frever. This letter is summarized below.

----------- from Norris email to Frever (7/31/98) -------------

Instantaneous Rate Equations.

Using instantaneous rate equations to compute mortalities links natural mortality and fishing mortality (since both sources are essentially competing for the same fish). The code fix I proposed (i.e., combining NatMortMgr and HarvestMgr into a single TotalMortMgr) was acceptable to everyone. This will allow us to use instantaneous rate equations; alternatively, the nat morts and fish morts can be computed in sequence, as we currently do when instantaneous rate equations are not used.
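The instantaneous-rate link between the two mortality sources is the standard Baranov catch equation; a sketch of what a combined TotalMortMgr would compute (the names and numbers are illustrative):

```python
import math

def total_mortality(N, F, M):
    """Apply fishing (F) and natural (M) instantaneous rates simultaneously.

    Total deaths are split in proportion to F and M, which is why the two
    mortality managers must be combined when using these equations.
    """
    Z = F + M                             # total instantaneous rate
    deaths = N * (1.0 - math.exp(-Z))
    catch = (F / Z) * deaths              # fishing's share of total deaths
    nat_mort = (M / Z) * deaths           # natural mortality's share
    survivors = N * math.exp(-Z)
    return catch, nat_mort, survivors

catch, nat, surv = total_mortality(1000.0, F=0.3, M=0.2)
```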

We discussed the potential infinite solution problem when solving for catch ceilings using inst rate equations. No one was concerned ... we just have to be careful setting up the problem. Jim Scott would like me to explore getting such an algorithm in place (ie solve for single fishery catch ceiling during a given timestep and region using inst rate equations). This shouldn't be too hard, but could take a little time. They want me to focus on the model comparisons the next four weeks, so this algorithm will have to slide for a while.
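The single-fishery ceiling solve described above can be sketched as a root-finding problem: find the fishing rate F at which the Baranov catch equals the ceiling. A minimal safeguarded-secant sketch (all numbers are illustrative, and this is not the model's solver):

```python
import math

def solve_ceiling_F(N, M, ceiling, lo=0.0, hi=5.0, tol=1e-10):
    """Find F in [lo, hi] such that Baranov catch(F) == ceiling."""
    def resid(F):
        Z = F + M
        return (F / Z) * N * (1.0 - math.exp(-Z)) - ceiling if Z > 0 else -ceiling
    a, b = lo, hi
    fa, fb = resid(a), resid(b)
    assert fa <= 0.0 <= fb, "ceiling not attainable within the bracket"
    for _ in range(200):
        # Secant proposal, safeguarded: bisect when it leaves the bracket.
        x = b - fb * (b - a) / (fb - fa) if fb != fa else 0.5 * (a + b)
        if not (a < x < b):
            x = 0.5 * (a + b)
        fx = resid(x)
        if abs(fx) < tol:
            return x
        if fx < 0.0:
            a, fa = x, fx
        else:
            b, fb = x, fx
    return 0.5 * (a + b)

F = solve_ceiling_F(N=1000.0, M=0.2, ceiling=200.0)
```

Keeping the root bracketed is what guards against the runaway (or infinite) solutions mentioned above when the ceiling is set carelessly.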

Discussion Paper Questions.

On the other issues (questions) I raised in my discussion paper (posted on the web site discussion page), we agreed to the following:

Q1. Using a common catchability coefficient for each stock is biologically acceptable, desirable, and feasible. Ken thinks he can solve for more than one stock at the same time.

Q2. Any other adjustments to fishing mortalities (e.g., due to bag limits, size limits, drop-offs, etc.) can be incorporated into the instantaneous catch equations used by the SSM. However, they cannot be estimated by the SSM ... they must come from outside the SSM (e.g., from the accepted values used by other models, such as the Lawson and Sampson equations used in the PM model). The key issue for our coding is that we will have more parameters to deal with, but we knew we would have to deal with them anyway. In many cases these will be fixed parameters that are constant across stocks, fisheries (or gear types), regions, and timesteps (e.g., drop-off rates, mark recognition rates, shaker mort rates).

The bigger issue is for parameter estimation. If those additional parameters will be used with inst rate equations, then they must be input as parameters into the equations during the estimation process. This means that Ken will need more than just catches and efforts to do his estimation. Someone (probably Jim Scott and/or Robert Kope) will have to get these parameters together for Ken before he can go into production mode for parameter estimation. Jim Scott reported that he still doesn't have all the effort data yet, so this parameter estimation process is getting even more complicated and time consuming.

Q3. Incidental morts will have to be handled like the other mort adjustments discussed above in Q2. Again, these will have to be provided to Ken before he can do the estimations.

Q4. No problem converting q values estimated using inst rate equations in the SSM to a discrete model case. Jim Scott gave me a paper to read about the meaning of "catchability coefficient." This paper deals with exactly the types of questions you were asking me about the definition of q at our last meeting. I'm about half way through and will give you an update next week.

SSM Estimates Using Synthetic Data.

Before leaving last week, I gave Ken some synthetic data to try to fit. He got the q values pretty well, but really missed the initial distribution and migration during the first few time steps. My "true" state of nature had the stock spread out over the entire range, no movement until day 100, then constant movement after day 100 (i.e., step size fixed at 7 units per day toward the natal stream). I set up the efforts to match the efforts for 1990 (i.e., ocean troll fishery in early timesteps, very concentrated net fisheries in Puget Sound later, and very low impact sport fisheries throughout the year). This set-up gave model catches very similar to CWT recoveries observed for 1990 (for the Voights Creek stock).

The SSM estimated that the entire stock was located in the north ocean at the start of the simulation, and then distributed itself over the range during the first few time steps. It got the catches pretty close.

This is a little discouraging because the problem seems to be related to the fact that the recovery information comes first from the ocean fisheries and then from the inside net fisheries. In other words, the effort is not distributed evenly over all the time steps--the fisheries operate only where the fish are. Thus, at the start of the year there is no significant fishery in the inside waters to provide info that there are some fish there, so the SSM just assumes they are all located in the north ocean. If a new fishing pattern is established for a future year, then the SSM's model of the fish distribution will still be way off and will result in very incorrect catches.

We set up 12 different types of conditions (2 migration patterns x 2 initial distributions x 3 fishing patterns) for me to test all the models, not just the SSM. I promised to try to get this done by the next meeting.

----------- end Norris email to Frever -------------------


3. Update on code development (Troy Frever).

Troy reported that he has ported the C++ code from his Unix system to his new PC. This took more time than expected due to slight differences in the C++ compilers. He still hopes to have a prototype model in PSC Chinook Model configuration by the end of October.


4. Update on effort data collection (Jim Scott).

Jim reported that he provided Ken with the ocean troll effort data. Ken checked it over and found that, with some minor exceptions, it was fairly close to the data he used for his dissertation work on the Humptulips stock. Jim's next step is to finish getting the "inside" sport and net effort data together.


5. Update on State Space Model (SSM) work (Ken Newman).

Ken could not attend the meeting, but provided the following email report on estimating parameters for the Humptulips stock using the additional data supplied by Jim Scott.

-------------------- Ken Newman email report ---------------------------

Thanks to Jim Scott getting the effort data for 86 to 91, I've now got estimates of the SSM parameters for 6 years for the Humptulips recoveries.

The model structure:

   Initial dist'n = Beta(Init_alpha, 2.0) on Brookings to northern BC (12 regions)

   Mortality = Survival from time of release to beginning of fishing
               q_US
               q_Canada

   Movement  = Beta(Move_alpha, 3.0) "advection-diffusion"

Notes:
 1. I've added northern BC back in since Allan Hicks found that
    there were a fair number of recoveries in that region for some
    of the years (especially 1988).
 2. still using different q's for US and Canada even though Jim
    Scott got effort "standardized" to boat days; I've still got
    a "scaling" factor for Canada effort twice that of US
 3. natural mort during season = 0
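Ken's initial-distribution module can be sketched numerically. The 12 regions and the Beta(Init_alpha, 2.0) form come from the report above; the midpoint-rule discretization and the orientation (bin 0 = Brookings in the south, last bin = northern BC) are assumptions for illustration:

```python
import math

def beta_pdf(x, a, b):
    """Unnormalized Beta(a, b) density on (0, 1)."""
    return x ** (a - 1) * (1 - x) ** (b - 1)

def initial_distribution(init_alpha, n_regions=12, beta=2.0):
    """Discretize Beta(init_alpha, beta) over n_regions equal-width bins.

    Bin 0 is assumed to be Brookings (south) and the last bin northern
    BC; midpoint-rule weights are an assumption, not Ken's actual code.
    """
    weights = [beta_pdf((i + 0.5) / n_regions, init_alpha, beta)
               for i in range(n_regions)]
    total = sum(weights)
    return [w / total for w in weights]

# Most years have Init_alpha near 3; 1988's 281.54 shoves nearly all
# of the initial mass into the northernmost region, matching the
# "shoved way north" observation below.
p_typical = initial_distribution(2.96)
p_1988 = initial_distribution(281.54)
```

This makes concrete why the 1988 fit looks strange: with the second parameter fixed at 2.0, the only way the Beta form can place fish far north is to drive Init_alpha to an extreme value.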

-----------------------------------------------------------------------
Resulting estimates:
       Initial  Surv     q_US    q_Can   Move
        alpha  to start                  alpha   -Log likelihood

1986:   2.96    5.95     6.51    4.93    8.58     5675.4
1987:   3.14    1.51     3.51    4.44    7.46      573.9
1988: 281.54    6.58    18.16    2.98   10.06     3593.6
1989:   2.94    4.40    11.97   11.82    7.36     2658.1
1990:   2.36    2.83     9.78    4.71    9.44     2462.3
1991:   2.55    6.26     8.79    4.35    9.51     3175.6

Except for 1988, the Initial_alpha parameter varies between 2.4 and 3.1, which is not too wild.

Initial survivals range from 1.5% to 6.6%.

Given the different scaling, one could double q_Canada to put things on the same scale:

      q_US   q_Canada
1986  6.51   9.86
1987  3.51   8.88
1988 18.16   5.96 
1989 11.97  23.64
1990  9.78   9.42
1991  8.79   8.70

-----------------------------------------------------------------------

1988 stands out as a "strange" year in many ways:

 1. the initial distribution is shoved way north
 2. the only year with "less" effective gear, assuming boat days
    are truly equivalent.
 3. highest initial survival

Just looking at the recovery data for that year, there are a relatively high number of late season recoveries in the far north regions. This is what is most affecting the results, I believe. To account for these catches, a sizeable portion of the population must be present that far away that late; one way of "forcing" this is to shove a lot of fish way north at first. It could be that fixing the 2nd parameter = 2.0 in the initial dist'n is causing this. Allowing this parameter to be estimated may smooth things out.

I think it's important to try out different initial dist'n and movement modules though before drawing too many conclusions.

---------------- end Ken Newman email report ----------------

There was a lengthy discussion of Ken's results which focused on the following four general items.

Item 1. Some members were wondering how correlated the q-values were with the initial alpha values. Jim Scott recalled that Ken previously reported that the covariance matrix didn't indicate much correlation. Some members wondered how much trouble it would be to generate the response surface for these two parameters given the others fixed.

Item 2. The variability in the q-values seemed pretty high. Jim Norris suggested it might be possible to assume constant q's across years and then fit a model to all six years.

Item 3. Two approaches to evaluating the predictive capabilities of the SSM were suggested. First, select one of the six years as a base year, estimate parameters, then predict each of the five other years. Second, select five of the six years as base years, estimate "average" parameters for those five years, then predict the remaining year. This is similar to what Rich Comstock did with the PM Model. Also, Jim Norris suggested that we need to get Ken the inside (i.e., non-ocean) effort data so he can try the SSM on the Voights Creek stock. This will provide an opportunity to work with a stock that has a more unidirectional movement pattern and a more complicated initial distribution.

Item 4. Most members would like to have some way of relating the "move alpha" parameter to an intuitive description of the migration process. Perhaps something like a plot of "average" daily step size for each day (given the estimated initial distribution). This would help identify any unrealistic predictions of the migration model.


6. Update on model comparisons program (Jim Norris).

Overview. Jim reported that he generated 16 synthetic datasets and supplied them to Ken for SSM parameter estimation. When Ken attempted to use the data, he discovered an error in Jim's data aggregation routine. Jim fixed the error and generated new datasets, but the SSM estimates could not be completed prior to this meeting. All datasets were generated in deterministic mode (i.e., no variability for any model processes). The 16 datasets included all combinations of four "treatments" and two "levels" per treatment:

Migration Pattern
  -- constant daily rate (13 units/day starting on Aug 12);
  -- increasing daily rate (0 units/day until Jun 28, then increasing linearly from 0 to 10.5 units/day by Sep 11).

Harvest Plan
  -- Base (similar to 1986); 
  -- Low Ocean (Ocean Troll effort = 15% of Base).

Data Aggregation By Time
  -- 30 days per aggregation period;
  --  7 days per aggregation period.

Data Aggregation By Space
  -- 15 statistical regions;
  --  6 region groups (RG):
        RG1 = WCVI 25-27 (North WCVI)
        RG2 = WCVI 21-24 (South WCVI)
        RG3 = WA 4B - WA 6 (St J de F)
        RG4 = WA 6b - 9 (WA 6B9)
        RG5 = WA 10 - 11 (South Sound)
        RG6 = Puyallup River
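The factorial design above is small enough to enumerate directly; this sketch (level names are shorthand for the descriptions above) just confirms that four treatments with two levels each yield the 16 synthetic datasets:

```python
from itertools import product

# Level names are shorthand for the treatment descriptions above.
migration_patterns = ["constant_rate", "increasing_rate"]
harvest_plans = ["base", "low_ocean"]
time_aggregations_days = [30, 7]
space_aggregations = ["15_statistical_regions", "6_region_groups"]

# Four treatments x two levels each = 16 dataset configurations.
datasets = list(product(migration_patterns, harvest_plans,
                        time_aggregations_days, space_aggregations))
```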

Fishing Effort For Base Harvest Plan. For ocean troll effort, he used the data presented by Jim Scott at the July 30 meeting. For effort patterns in the net and sport fisheries, he used educated guesses based on his own fish tickets from 1986 and published regulations. The gillnet effort was concentrated in the Strait of Juan de Fuca during weeks 33-36 (August sockeye salmon fishery) and in South Puget Sound during the September coho fishery. Some sport effort occurred throughout the year, with elevated effort in the Strait of Juan de Fuca during August-September and in South Sound in September-October.

Initial Distribution. All model runs used the same initial distribution of the cohorts: normal distribution with mean = 550 (southern part of WCVI) and sigma = 150.
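A minimal sketch of drawing school starting positions from this distribution (the mean of 550 and sigma of 150 are the values given above; the coordinate units and the Gaussian sampler are assumptions):

```python
import random

def initial_school_positions(n_schools, mean=550.0, sigma=150.0, seed=0):
    """Draw along-coast starting positions for each school.

    Mean 550 (southern WCVI) and sigma 150 are the values reported in
    the minutes; units and the use of random.gauss are assumptions.
    """
    rng = random.Random(seed)
    return [rng.gauss(mean, sigma) for _ in range(n_schools)]

positions = initial_school_positions(25)
```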

Effect of School Size. The model comparison program configures each cohort into individual schools that migrate independently. To test for the effects of the number of schools on the simulated catch distribution, Jim compared simulated catch distributions using 10, 25, 50, and 75 schools per cohort for several model configurations. There were some minor differences between 10 and 25 schools, but virtually no differences between 25, 50, and 75 schools. Thus, he used 25 schools per cohort to generate synthetic data.

Base Harvest Plan Catch Distribution. For the "base" harvest plan and the "constant" migration pattern, the simulated catch distribution (25 schools/cohort) was similar to that of the 1986 CWT recoveries for the Voights Creek stock. For the "increasing" migration pattern, the J de F Net fishery catch increased and the SS Net fishery catch decreased.

                                       Model Simulation %
  Fishery       Est CWTs (  %)     Constant Mig   Increasing Mig

  Ocean Troll      2,953 (37%)          35%             35%
  J de F Net       1,465 (19%)          19%             26%
  J de F sport       548 ( 7%)           6%              7%
  6B9 Net             82 ( 1%)           0%              0%
  6B9 Sport          117 ( 2%)           0%              1%
  SS Net           2,481 (31%)          36%             27%
  SS Sport           272 ( 3%)           3%              3%

FRAM Analysis. Jim demonstrated the latest version of the model comparisons program. The program makes a "BaseModel" run using one of the 16 model configurations, estimates FRAM parameters, and then simultaneously runs the BaseModel (under ANY configuration) and FRAM (using the estimated parameters) to compare FRAM predicted catches with "true" (i.e., BaseModel) catches in a future year. A "selective" fishery plan can be used in the "future" year. The "selective" harvest plan reduces harvest rates or catchability coefficients for the Unmarked Cohort in the Ocean Troll and all Sport fisheries to 15% of the "base" harvest plan. When the BaseModel was run in deterministic mode for future years, FRAM catch predictions were very close to the "true" values for all harvest plan configurations (base, low ocean, and selective). When the BaseModel was run with variability in the Initial Distribution and Migration processes, FRAM predictions were further from the "truth." Too few runs were completed prior to the meeting to draw further conclusions.

Next Step. The next step will be to add PM model parameter estimation and to quantify goodness-of-fit to the "true" model for FRAM, PM, and SSM models during forward simulations.


7. Next meeting.

The next meeting is scheduled for Tuesday October 6 at 9:00 am at NMFS Montlake.


25. SSM Discussion Issues by Jim Norris, 7/23/98

At the July 2, 1998 meeting Jim Scott asked whether or not harvest parameter estimates from the State Space Model (SSM) would be compatible with our proposed Harvest Process and Fishing Process concepts. Before discussing compatibility, I summarize these concepts. Our purpose in rigorously defining these concepts is to clearly identify the inputs, outputs, data requirements, and functions of these critical code objects.


Harvest Process. Within a given year, timestep, region, and fishery a harvest process defines the interaction between the amount of fishing effort (i.e., number of people involved) and the number of fish from a given stock and cohort. In this context we define a "fishery" to include all regulations and properties other than the amount of fishing effort (e.g., size limits, bag limits, and selective fishery rules). A cohort is defined to be any group of fish having the same identifying characteristics and demographic features (e.g., parent stock, tag status, mark status, sex, growth group, and genetic group).

In virtually all types of fishery simulation models, there is a line of code (occasionally more than one line) that assigns a legal catch at the year, timestep, region, fishery, stock, and cohort level. In most cases, this line of code represents what we call a harvest process. Three common types of equations are the following:

1. Simple Linear Rate (used by FRAM & PSC chinook model).

   C(c,f) = HR(c,f)*N(c)

where

   C(c,f) = catch of cohort c in fishery f
   HR(c,f) = harvest rate for cohort c in fishery f
   N(c) = abundance of cohort c at start of period.

2. Non-Linear Relationship (similar to PM Model).

   C(c,f) = (1 - exp(-q(c,f)*E(f)))*N(c)

where

   q(c,f) = catchability coeff for cohort c in fishery f
   E(f) = effort in fishery f during period.

3. Instantaneous Rates (used by SSM).

            F(c,f)
   C(c,f) = ------ * (1 - exp(-Z(c))) * N(c)
             Z(c)

where

   F(c,f) = inst rate of fishing mort
   Z(c) = inst rate of total mortality for cohort c, and

   Z(c) = M(c) + Sum[F(c,f)]  over all f
   F(c,f) = q(f)*E(f)
   M(c) = inst rate of natural mort for cohort c.
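The three equation types can be collected into a short sketch (symbols follow the definitions above; the Python rendering is illustrative):

```python
import math

def catch_linear(HR, N):
    """1. Simple linear rate (FRAM, PSC Chinook Model): C = HR * N."""
    return HR * N

def catch_nonlinear(q, E, N):
    """2. Non-linear relationship (similar to the PM Model):
    C = (1 - exp(-q*E)) * N."""
    return (1.0 - math.exp(-q * E)) * N

def catch_instantaneous(q, E, M, F_other, N):
    """3. Instantaneous rates (SSM, Baranov catch equation).

    F = q*E for this fishery; F_other holds the F values of every
    other fishery operating on the cohort in the same period.
    """
    F = q * E
    Z = M + F + sum(F_other)
    return (F / Z) * (1.0 - math.exp(-Z)) * N
```

With no natural mortality and no competing fisheries, form 3 reduces exactly to form 2, and for small q*E both approach the linear form, since 1 - exp(-qE) is approximately qE.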


Fishing Process. For each year, timestep, region, and fishery a fishing process defines the amount of fishing effort to be input into the harvest processes for all cohorts residing in the given time and region in order to satisfy some management objective. Note that under this formulation, a fishing process does not compute any fishing mortalities--it only determines the inputs to the harvest processes. Only harvest processes compute fishing mortalities. Note also that a fishing process applies only to a single year, timestep, region, and fishery. This is the issue I think Jim was concerned about.

In the PSC Chinook Model, non-ceilinged fisheries have a fixed harvest rate management objective. Thus, the fishing process effort levels are set for each fishery at configuration time and are passed into each harvest process without modification. On the other hand, each simple ceilinged fishery adjusts the effort level for all harvest processes in a given year, region, and timestep by a scalar (called the RT factor) in order to make the sum of the legal catches meet the management objective.
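A sketch of the ceiling adjustment: find the RT scalar so that the summed legal catch hits the ceiling. The non-linear harvest equation and bisection are stand-ins here (earlier minutes mention a safe-guarded secant solver for this role), and all numeric values are illustrative:

```python
import math

def total_catch(rt, efforts, qs, abundances):
    """Summed legal catch over all harvest processes in one fishery when
    every effort level is scaled by the RT factor (non-linear harvest
    equation used for illustration)."""
    return sum((1.0 - math.exp(-q * rt * E)) * N
               for E, q, N in zip(efforts, qs, abundances))

def solve_rt(ceiling, efforts, qs, abundances, lo=0.0, hi=10.0):
    """Bisection for the RT factor that makes total catch equal the
    ceiling. total_catch is increasing in rt, so bisection converges;
    the project's safe-guarded secant solver would be faster."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if total_catch(mid, efforts, qs, abundances) < ceiling:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative numbers: two cohorts, ceiling of 500 fish.
rt = solve_rt(500.0, [10.0, 10.0], [0.05, 0.05], [1000.0, 1000.0])
```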


Code Issues. Up to now we had not considered implementing instantaneous rate equations. From a code perspective, instantaneous rate equations for harvest processes are fundamentally different from the linear and non-linear equations because all the required information is not autonomous within a single fishery. Specifically, the instantaneous nat mort rate for each stock is needed, along with the instantaneous fishing mort rates for all other fisheries within the same region. Thus, to implement instantaneous rate equations, a fishery object must have access to this outside information. If two fisheries operating in the same region at the same time both have quotas, whenever effort in one fishery is adjusted to meet its quota, the Z value (total mort rate) changes for all stocks. Thus, even simple quota fisheries will have to be solved together through a common algorithm. At a more fundamental level, the natural mortality and fishing mortality processes are intertwined and must be computed simultaneously. That is, one cannot compute nat morts and then move on to computing fishing morts, unless the fishing effort levels do not require adjustment to meet some objective. For all of the above reasons, I conclude that our proposed code structure is not compatible with using instantaneous rate equations.


Code Solution. First, we must combine the Nat Mort process and the Fishing Mort process into a single Mortality process during each timestep of a model run. A model can be configured to use one of two types of Mortality process. One type computes nat morts and fishing morts independently (as most models currently do). The second type computes the nat and fishing morts simultaneously using instantaneous rates. At the start of the mortality process, all instantaneous rates would have to be determined. For example, the nat mort rates could be related to the physical environment of the region and/or the average size (length) of the individuals in the cohort. Likewise, the fishing mort rates could be determined by fixed effort levels, or the effort levels for some fisheries could be set dynamically via some algorithm. The key point is that somehow all the rates are established prior to any computations. Once the rates are established, the nat mort computations and fishing mort computations can be made independently. If there are any constraints within the given timestep and region (e.g., quotas, escapement goals, allocations), then an algorithm must be written to adjust fishing effort levels at the start of the total mortality process.
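The second type of Mortality process, with nat and fishing morts computed simultaneously from pre-established instantaneous rates, might look like this sketch (the function name and the single-cohort scope are illustrative):

```python
import math

def mortality_step(N, M, fishery_rates):
    """One timestep of a combined mortality process for one cohort.

    N             -- abundance at the start of the timestep
    M             -- instantaneous natural mortality rate
    fishery_rates -- {fishery: F} with F = q*E already established,
                     per the text, before any computations are made

    Returns (survivors, natural_deaths, catches_by_fishery).
    """
    Z = M + sum(fishery_rates.values())
    total_deaths = (1.0 - math.exp(-Z)) * N
    # Each mortality source takes its share in proportion to its rate.
    catches = {f: (F / Z) * total_deaths for f, F in fishery_rates.items()}
    natural_deaths = (M / Z) * total_deaths
    return N - total_deaths, natural_deaths, catches
```

Because every death is allocated in proportion to its instantaneous rate, survivors plus natural deaths plus all catches always reconstruct the starting abundance, no matter how many fisheries share the region.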


Code/Algorithm Problems. Finding effort levels to meet some constraints using instantaneous rate equations can be tricky. The fact that the total instantaneous mort rate (Z) is the sum of several individual fishery rates can lead to an infinite number of acceptable solutions. For example, if there is an allocation goal to equalize the sum of all Treaty fisheries with all Non-Treaty fisheries within a region, there can be many solutions (unless allocations WITHIN the Treaty and Non-Treaty groups are also specified). I used an Excel spreadsheet with Solver to model four fisheries, four stocks, and two timesteps (using instantaneous rate equations), and found that one must be very specific about the constraints in order to have a unique solution. The bottom line is that it is easy to make the model framework compatible with instantaneous rate equations, provided there is never any need to adjust fishing effort levels to meet management constraints. If constraints must be met, it looks like the algorithms might be tricky.


SSM Parameter Estimation. The SSM uses instantaneous rate equations and provides estimates of catchability coefficients (q) for each fishery. However, up to this point we do not have a formal description of a SSM that includes multiple fisheries operating within the same region at the same time. The two fisheries modeled in the prototype SSM (Canadian Troll, US Troll) operate simultaneously, but in different regions. I think I know what that formulation will look like, but we need to formalize it. Once the model is formulated, we need to answer the following questions:

Q1. If the SSM is fit to data for individual stocks, should the forward simulation model use separate q's for each fishery and stock? The alternative is to fit the SSM to multiple stocks assuming a common q. Is this biologically appropriate (i.e., are all stocks equally vulnerable to a given fishing gear)? Is this feasible? How will it be done?

Q2. Regardless of how Q1 is resolved, the q estimates for each fishery will reflect all regulations associated with each fishery during the time frame when the data were collected (e.g., size limits, bag limits, selective fishery rules). If we desire to simulate changes in size limits, bag limits, and selective fishery rules, how will the SSM be modified to reflect these changes? The Lawson and Sampson (1996) model might be appropriate.

Q3. In forward simulation, how will the SSM handle incidental mortalities? The q's estimated by the SSM reflect legal catches and will not provide information about incidental mortalities related to fishing. I believe all incidental morts will be absorbed by the instantaneous nat mort parameter (M) in the SSM, provided it is not assumed to be zero. I suppose we can use auxiliary data to partition M into all types of non-legal catch mortalities. How will we do this?

Q4. The q's estimated by the SSM will be instantaneous rates based on a daily timestep. If a forward simulation model is configured to operate on a weekly or monthly basis, must the model use instantaneous rate equations, or can we convert the q's (and associated effort levels) to what Ricker calls "conditional rates" (i.e., the fraction of a cohort that dies within a given time period) and use other equations?
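In the simplest case the conversion Q4 asks about is direct; this sketch (names are illustrative) computes Ricker's conditional rate from a daily instantaneous q:

```python
import math

def conditional_rate(q_daily, effort_per_day, days):
    """Ricker's conditional rate: the fraction of a cohort dying to this
    fishery over `days` days, assuming constant daily effort and no
    other mortality source operating (both simplifying assumptions)."""
    return 1.0 - math.exp(-q_daily * effort_per_day * days)
```

Once another mortality source operates in the same period, the fishery's realized kill fraction drops below this value (it takes only the F/Z share of total deaths), so conditional rates from separate fisheries cannot simply be added--which is the coupling discussed under Code Issues above.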


Final Thought. It seems that the theoretical model we seek is some combination of the detailed migration sub-model of the SSM with the detailed harvesting sub-model proposed by Lawson and Sampson (1996). The question is: If we include the detailed harvesting sub-model into the SSM, can the SSM estimate all the parameters? Can we make some simplifying assumptions and use auxiliary data to make the problem tractable?

24. July 2, 1998 meeting minutes by Jim Norris, 7/19/98

To: NMFS Salmon Model Committee and interested parties.

FROM: Jim Norris

Subject:  Minutes of the July 2, 1998 meeting

Contents:

1. Attendance list.
2. Update on State Space Model (SSM) work (Ken Newman).
3. Report on harvest algorithm work group meeting (Jim Norris).
4. Update on model comparisons program (Jim Norris).
5. Experimental design recommendations for model comparisons (Jim Norris).
6. Report on June PFMC meeting (Robert Kope).


1. Attendance List.

Norma Jean Sands (ADFG)
Ken Newman (UI)
Jim Norris (UW)
Jim Scott (NMFS NWFSC)
Robert Kope (NMFS NWFSC)
Martin Liermann (NMFS NWFSC)
Gary Morishima (QMC)
Kristin Nason (NWIFC)


2. Update on State Space Model (SSM) work (Ken Newman).

Ken reported that he did not have any new Humptulips results because he did not have all the effort data yet (he has all the Humptulips CWT data). The effort data needed is for the commercial troll fleets, which account for about 95% of all the Humptulips CWT recoveries.

Jim Scott reported that he has completed getting the Canadian effort data (including reallocating the freezer boat data to individual weeks), but was still having trouble getting all the Washington and Oregon data. He was hopeful that he would have these data completed by the end of next week (July 10). Also, he is still trying to get the Alaska troll data broken down by week.

On a related data issue, Robert Kope reported that Cara Campbell is entering a single Lat/Lon coordinate for each recovery region (representing the center of the region) into her recovery site database. She is simply reading these coordinates off a chart. There was some discussion about the potential problem these data may cause when attempting to estimate ocean distributions. The main concern is that associating CWT recoveries with a single location may unrealistically weight more recoveries into a sub area. For example, a recovery region composed of three smaller sub-regions may, in essence, have all of its recoveries assigned to one of the sub-regions, when in fact they were distributed over all three sub-regions. Some type of sensitivity analysis may be required to evaluate the effects of this problem on ocean distribution and migration estimates.

Kristin distributed a memo from Jennifer Gutmann regarding stock migration rules for the Voights Creek stock. This included a table summarizing CWT recoveries for this stock for broods 1975 - 1994. During this period there were 25,918 individual tag recoveries in fisheries (44,520 total recoveries, including at the hatchery rack) representing 112,297 estimated recoveries in fisheries (i.e., adjusted for sampling effort). Over 28% of the tags were recovered in the WCVI area; only 6% were recovered in WA and OR ocean fisheries. About 1% were recovered in inside Canadian waters. About 32% of the recoveries came from Puget Sound areas 10, 11, and 11A, with Hood Canal and other Puget Sound areas accounting for less than 1%.

Two types of SSM migration rules for this stock were discussed. The first included five migration sections: north ocean, south ocean, Strait of Juan de Fuca, north Puget Sound and Georgia Strait, and South Puget Sound. The second included just three regions: ocean, Strait of Juan de Fuca, and inside waters. Ken noted that even the three section model would require quite a few parameters to estimate, and that he may need to experiment with different models to see which works best. There will undoubtedly be a trade-off between bias and variance.


3. Report on harvest algorithm work group meeting (Jim Norris).

Jim reported that attendance at the harvest algorithm work group meeting on June 16 included himself, Troy Frever, Jim Scott, Gary Morishima, Marianne McClure, Jennifer Gutmann, and Marianna Alexandersdottir. Some handouts related to this meeting are available for viewing at: /harvest/Jul2notes.pdf

At the last full committee meeting (May 26) Troy reported that the PSC Chinook Model shaker algorithm appeared to be incompatible with the new model framework. This turned out to be the case, and the problem was discussed by the work group. The main difficulty is that the current algorithm uses the somewhat arbitrary "Preterminal" and "Terminal" designation of a stock/fishery combination to compute some intermediate variables. For example, during the preterminal time step, age 2 and age 3 cohorts from Fraser River stocks are considered unavailable to the Fraser Net fishery (because they are designated "terminal" for that fishery), but are considered available for all other fisheries. This creates the unusual situation where the Fraser River net fishery is allowed to occur during the preterminal timestep and to harvest age 2 and age 3 fish from non-Fraser River stocks, but is prevented from harvesting age 2 and age 3 fish from Fraser River stocks.

Jim showed a table illustrating how the new model structure could be configured to represent the above ocean net fishery situation (see notes at: /harvest/Jul2notes.pdf). The ocean would be divided into several subareas. At the start of each year, all age 4 and age 5 cohorts from all stocks would reside in an open ocean region. The age 2 and age 3 cohorts would occupy appropriate near-coastal subareas. For example, these cohorts from the Fraser River stocks would occupy a Fraser River preterm area, and the Puget Sound stocks would occupy a Puget Sound preterm area. During the preterm timestep, all troll fisheries would operate in all preterminal areas (open ocean plus all subareas). The net fisheries would not operate in the open ocean area, and would be assigned to operate in all preterm subareas except the area associated with stocks from their river system. For example, the Fraser River net fishery would be assumed to operate in the WCVI, Puget Sound, and all other age 2 and age 3 preterm areas except the Fraser River preterm area (which contains the age 2 and age 3 Fraser River cohorts that it is not allowed to harvest during the preterm timestep). Similar subarea designations would be required for terminal areas.

Jim noted that another option for the new model was to assign "Preterm" and "Term" flags to each cohort/fishery combination and write a special shaker algorithm. Jim Scott suggested that this option not be selected, and that the new model be configured as above to implement the PSC chinook model shaker algorithm. Jim Norris agreed to reconfigure the new model, rather than use the flag system.

Another algorithm discussed by the work group was the FRAM South Puget Sound allocation/escapement goal algorithm. Jim passed out a narrative description of that algorithm (available for viewing at: /harvest/Jul2notes.pdf).

Jim reiterated that the current code design includes (1) Harvest Process that describes the relationship between fishing mortalities (legal and incidental), fishing effort, and fish abundance at the year, timestep, region, fishery, cohort level, and (2) Fishing Process that determines the fishing effort required at the year, timestep, region, and fishery level to meet some objective. Thus, the output from a Fishing Process is an amount of fishing effort which is input to a collection of Harvest Processes, each of which returns fishing mortalities. One conclusion from the work group meeting was that it appears that higher level processes should output constraints (e.g., fixed harvest rates, a quota, escapement goal) to Fishing Processes. The next higher level process would be a "Region Process" that controlled constraints on fisheries at the year, timestep, and region level. Jim noted that under this configuration interactions between fisheries operating within the same timestep and region would be handled by a Region Process, not individual Fishery Processes.
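The three-level hierarchy described above might be sketched as follows. The class names mirror the minutes; the interfaces, the fixed-effort Region Process, and the choice of the non-linear harvest equation are assumptions for illustration:

```python
import math

class HarvestProcess:
    """Maps fishing effort to legal catch for one cohort; the
    non-linear equation is one of the forms discussed in these minutes."""
    def __init__(self, q, abundance):
        self.q = q
        self.abundance = abundance

    def harvest(self, effort):
        return (1.0 - math.exp(-self.q * effort)) * self.abundance

class FishingProcess:
    """Determines the effort for one fishery and feeds it to the
    fishery's harvest processes, which return the fishing mortalities."""
    def __init__(self, harvest_processes):
        self.harvest_processes = harvest_processes

    def take_harvests(self, effort):
        return [hp.harvest(effort) for hp in self.harvest_processes]

class RegionProcess:
    """Controls the constraints on all fisheries in one region and
    timestep; here the 'constraint' is just a fixed effort per fishery."""
    def __init__(self, fishing_processes, efforts):
        self.fishing_processes = fishing_processes
        self.efforts = efforts

    def run(self):
        return {name: fp.take_harvests(self.efforts[name])
                for name, fp in self.fishing_processes.items()}
```

A quota or allocation constraint would replace the fixed efforts with a solver inside RegionProcess.run(), which is exactly where the minutes place fishery-to-fishery interactions.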

Jim Scott asked whether the above framework was consistent with the type of harvest rates that would be estimated by the SSM. His general concern was that the new model framework should be consistent with the SSM model estimates, and that these estimates might have inherent assumptions about interactions between fisheries operating within the same region. Jim Norris said he would look into this subject and report back.


4. Update on model comparisons program (Jim Norris).

Jim reported that computing parameters for the SSM and the Proportional Migration (PM) model required storing life history data for each school in the Base Model (e.g., timestep, location, abundance, fishery index, and region index). When he attempted to do this by creating a list of life history records for each school, the number of objects became too large for the Visual Basic development environment. Thus, he stored the life history data into MS Access database tables. This worked fine, and was faster. The program now plots the abundance and location of each school in the Base Model. The code to do the SSM and PM model parameter estimates has not been implemented yet.


5. Experimental design recommendations for model comparisons (Jim Norris).

Jim showed his planned BaseModel configuration for doing model comparisons. His goal is to approximate the passage of Voights Creek stock through several regions and fisheries. The configuration included the following:

Five regions: ocean, Juan de Fuca Strait, Area 6B9, South Sound, and River (spawning area).

Seven fisheries: ocean troll, Juan de Fuca sport and net, Area 6B9 sport and net, and South Sound sport and net.

One stock: Voights Creek surrogate with two cohorts--marked and unmarked.

Jim noted that in the existing FRAM model the sport fisheries are treated as "preterminal" for all stocks, even though they actually occur in the same geographic regions as the net fisheries, which are considered "terminal." After a lengthy discussion of the difficulties in duplicating this arrangement (the problem is similar to that of simulating the PSC Chinook Model Shaker Algorithm discussed earlier in the meeting), it was decided that for the purposes of the model comparisons, the sport fisheries should be considered as "terminal" in the FRAM model.

Jim presented some preliminary ideas about how the model comparisons program could be used to determine the inherent biases of each model (FRAM, PM, SSM). The discussion made clear that these ideas were not fully thought out yet, and he promised to provide a written description of the proposed methods.

Ken Newman presented some ideas for the experimental design. He noted the following six factors (or "treatments") that may be included in the analysis:

-- Fishery types (commercial troll, sport, gillnet, etc);
-- Harvest plan (effort by time and region);
-- Stock (e.g. may have up to 50 stocks);
-- Natural survival;
-- Initial distribution;
-- Migration (rates and pattern).

Jim Scott and Gary Morishima suggested two additional factors:

-- Growth;
-- Measurement errors.

Even with only two levels per factor, that would create 256 possible combinations. Ken recommended reducing this to 32 (or 16, if possible) combinations with the following factor levels:

-- 1 set of fisheries;
-- 2 harvest plans;
-- 2 stocks;
-- 2 natural survival rates;
-- 2 initial distributions;
-- 2 migration patterns.

Once the treatment combinations are determined, the evaluation analysis proceeds in four steps.

Step 1. Generate baseline data. Generate 6 baseline years of data by running 6 simulations of the survival, fishing, and migration processes. The data from these six simulations would be used by the different models to estimate parameters and calibrate the model. One approach (per simulation) is to fix the stock and release number and then:

-- randomly select survival rates according to a uniform distribution between the low and high values from the above survival factor;
-- randomly generate a fishing plan (e.g., take the 2 plans in the fishing plan factor, randomly pick a plan, then use a Poisson distribution to randomly perturb the effort in each time-area cell);
-- randomly distribute the initial survivors in space (uniform, truncated normal, etc);
-- simulate "some" migration process.
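Ken's per-simulation recipe can be sketched as below. The uniform survival draw and Poisson effort perturbation come from the steps above; the Knuth Poisson sampler, the uniform initial spatial distribution (one of the options listed), and all parameter names are illustrative assumptions:

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small mean effort values."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def simulate_baseline_year(low_surv, high_surv, fishing_plans,
                           n_regions, rng):
    """One baseline-year draw following the steps above.

    fishing_plans is a list of candidate plans, each a time x area grid
    of mean effort. Returns (survival, perturbed_plan, initial_dist).
    """
    # Survival drawn uniformly between the low and high factor levels.
    surv = rng.uniform(low_surv, high_surv)
    # Pick a plan, then Poisson-perturb each time-area effort cell.
    plan = rng.choice(fishing_plans)
    perturbed = [[poisson(e, rng) for e in row] for row in plan]
    # Uniform initial spatial distribution (one of the listed options).
    init_dist = [1.0 / n_regions] * n_regions
    return surv, perturbed, init_dist
```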

Step 2. Generate the "truth." For each treatment combination, simulate 100 random sets of data (catches, abundances) at the finest space-time resolution. These 100 simulations represent the pre-season uncertainty even if one knew the parameters. One could then calculate the average catches, abundances, and escapement to arrive at a single number. Note that if one wants a single number for the "truth," running the true model in deterministic mode is not going to give the same result as the expected value for runs made in a stochastic mode. For each stock/fishery/time/region cell, compute the distribution parameters of the 100 simulated catches.

Step 3. Simulate each treatment. The modelers have the 6 base years for calibration--but these are independent of step 2 above. For the model evaluations, the modelers would only get the stock, release number, catch (??), and effort by time and area used in step 2. They would not see any of the 100 simulations. The modelers run their models to predict what the catch and escapement will be under each treatment combination in step 2.

Step 4. Evaluate performance. The quality of each model would be measured by the distance between its predicted catches by time and area (and escapement) and the "true" catches. There are several ways to define distance:

 -- Method a.

    Sum_{time} Sum_{area}

    Catch-bar_{time, area}*(percentile_{time, area}(model) - 0.5)^2

where Catch-bar is the average of the 100 simulations and percentile_{time, area}(model) is the percentile the modeler's forecasted catch falls into amongst the 100 simulations.

 -- Method b.

    Sum_{time} Sum_{area}

   (Catch_{time, area}(model) - Catch-bar_{time, area})^2
    ------------------------------------------------------
                Catch-bar_{time, area}

where Catch_{time, area}(model) is the modeler's forecast. This measure is just like a Chi-square goodness of fit.

-- Method c.

   Sum_{time} Sum_{area}

  (Catch_{time,area}(model) - Catch-bar_{time,area})^2

 Ken recommended Method a because it gives higher weights to larger catches. There was a general discussion of these suggestions, but no specific analysis plan was adopted.
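For concreteness, Methods a and b can be written out as follows. This is a sketch: `simulated` maps each time-area cell to its list of (e.g., 100) simulated catches, and the cell keys are illustrative.

```python
def method_a(forecasts, simulated):
    """Method a: squared deviation of the forecast's percentile from 0.5,
    weighted by the mean simulated catch (so larger catches weigh more)."""
    total = 0.0
    for cell, sims in simulated.items():
        catch_bar = sum(sims) / len(sims)
        # percentile of the forecast among the simulated catches
        pct = sum(1 for c in sims if c <= forecasts[cell]) / len(sims)
        total += catch_bar * (pct - 0.5) ** 2
    return total

def method_b(forecasts, simulated):
    """Method b: chi-square-like distance to the mean simulated catch."""
    total = 0.0
    for cell, sims in simulated.items():
        catch_bar = sum(sims) / len(sims)
        total += (forecasts[cell] - catch_bar) ** 2 / catch_bar
    return total
```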


6. Report on June PFMC meeting (Robert Kope).

Robert reported that the June PFMC meeting dealt mostly with non-salmon issues, and had no salmon issues of significant concern to this committee.


7. Next Meeting.

The next meeting will be held on July 30, 1998 at 9:00 am at NMFS Montlake.

23. Minutes of May 26, 1998 meeting, 7/06/98

To: NMFS Salmon Model Committee and interested parties.

FROM: Jim Norris

Subject:  Minutes of the May 26, 1998 meeting

Contents:

1. Attendance list.
2. Update on code development (Troy Frever).
3. Update on FRAM vs PM Model comparisons (Jim Norris).
4. Elements of decision theory and relevance to preseason planning (Ken Newman).
5. Estimating migration parameters in more complex spatial frameworks (Ken Newman).
6. Combining multiple effort measures (Ken Newman).
7. Applying the SSM model to chinook salmon (Ken Newman).
8. Data issues (Robert Kope, Jim Scott).
9. Work schedule (Jim Scott).
10. Meeting Schedule and Milestones (Jim Scott).


1. Attendance list.

Din Chen (CDFO)
Norma Jean Sands (ADFG)
Carrie Cook-Tabor (USFWS)
Rich Comstock (USFWS)
Ken Newman (UI)
Jennifer Gutmann (NWIFC)
Jim Norris (UW)
Troy Frever (UW)
Jim Scott (NMFS NWFSC)
Robert Kope (NMFS NWFSC)
Cara Campbell (NMFS NWFSC)
Mindy Rowse (NMFS NWFSC)
Martin Liermann (NMFS NWFSC)
Steve Lindley (NMFS SWFSC)
Gary Morishima (QMC)
Jim Anderson (UW)


2. Update on code development (Troy Frever).

Troy reported that he is still in the process of translating the harvest algorithms into the new format. The new format has a variable number of regions and timesteps, whereas the old format computes harvests in fixed "preterminal" and "terminal" timesteps/regions. This task has proven to be more complicated than anticipated.

Troy noted that he is considering changing the way input data are brought into the model. In the prototype model Jim Norris has been using for model comparisons, the input data are stored and manipulated outside the model in Access databases. Using this type of input format may reduce the amount of code we have to write and would help meet one of the project objectives, namely interfacing with "outside" databases.

A specific problem Troy is currently working on is the shaker algorithm. The old code has only four cohorts per stock--one for each age class--whereas the new model format will allow an unlimited number of cohorts. After some group discussion to clarify the problem, it was decided that there would be no problem if the current algorithm were used to compute shakers for each stock and age class within a timestep and region, and those shakers were then allocated to the individual cohorts within each age class in proportion to their relative abundances in that region at the start of the timestep.
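The allocation rule agreed on above--spread the age-class shaker total across cohorts in proportion to start-of-timestep abundance--is simple enough to state in a few lines (a sketch with hypothetical names):

```python
def allocate_shakers(age_class_shakers, cohort_abundances):
    """Allocate the shaker total computed for one stock/age class among its
    cohorts, proportional to each cohort's abundance in the region at the
    start of the timestep. `cohort_abundances` maps cohort id -> abundance."""
    total = sum(cohort_abundances.values())
    if total == 0:
        return {c: 0.0 for c in cohort_abundances}
    return {c: age_class_shakers * n / total
            for c, n in cohort_abundances.items()}
```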


3. Update on model comparisons (Jim Norris).

Jim briefly reviewed the different migration assumptions, algorithms, and estimation methods used by the three models in question (FRAM, PM, and SSM) and the model comparison approach he proposed at the last meeting. He stated that after the discussion at the last meeting, he decided that the model comparison program should be re-written using the object oriented capabilities of Visual Basic. This would serve two purposes. First, it would provide a prototype within which code design for the new model framework could be tested. Second, it would facilitate a more structured method of comparing the models.

The new model comparison code has a "ModelEvaluator" class that contains several instances of the "Model" class: a BaseModel, a FRAMModel, a PMModel, and a SSMModel. Each instance of the Model class has a "State" class which contains a list of Region, Stock, and Fishery objects. Each Model class object also has several process manager objects (e.g., NatMortManager, MigrationManager) that perform operations on the State during each timestep.

For model comparison analysis the BaseModel's state and process managers are configured to simulate the true state of nature. A daily timestep is used and each process can be simulated using a different algorithm each day, or group of days. The data used to configure a model are stored in Access database files. The user interface allows the user to specify which database files to use and to specify how much variability the different processes will have (e.g., none, low, med, high).

When the BaseModel is initialized, the user interface displays schools from the unmarked and marked cohorts of a stock as tic marks along a linear migration path. The height of each tic mark represents the abundance of that school. Vertical lines along the migration path delineate region and fishery locations. When the BaseModel is run, the school tic marks change position each day along the migration and change height as they suffer natural and fishing mortality. Cumulative catches in each fishery are shown also.

Each instance of the Model class has a DataManager object that keeps a list of true catches and escapements. When the BaseModel run is completed, the user clicks a "Measure Data" control button that directs the DataManager to create a list of measured catches and escapements. The amount of measurement error (none, low, high) is controlled via the GUI.

The ModelEvaluator class has objects that contain methods for analyzing measured data created by the BaseModel. For example, when the user specifies the data aggregation resolution (i.e., number of days per aggregation period) and clicks the "Estimate FRAM" button, the FRAMAnalyst object estimates FRAM parameters and stores them in Access data files. When the "Run FRAM" button is clicked, the FRAMModel is configured, initialized, and run and displayed in the GUI. When the "Run Estimation" button is clicked, the BaseModel and FRAMModel are run simultaneously so the user can see more clearly how the FRAM model simulates the true state of nature.

At present the code for the PM and SSM models is not in the program. Jim felt that he could implement the PM code without too much trouble, but thought implementing the SSM estimation code would be too difficult. Initially, he plans to send "measured data" output to Ken Newman via the internet, have Ken estimate parameters, then use those results to configure the SSMModel to run simultaneously with other models in the model comparisons program. Jim Scott requested that Jim still try to integrate Ken's estimation code into the model comparisons program.

There was a brief discussion of how to simulate estimating FRAM and PM parameters from multiple years of data instead of a single year. Rich Comstock said that the PM model uses the median of the estimated values. Jim Scott and Gary Morishima stated that FRAM uses a weighted sum of CWT recoveries over several years to create a single "base year" of CWT data, where the weights for each year are the relative abundances of each year class.
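The weighting scheme Jim Scott and Gary Morishima described for FRAM can be illustrated as follows. This is a sketch of the stated idea only; the actual FRAM base-year construction may differ in detail.

```python
def base_year_recoveries(recoveries_by_year, abundances_by_year):
    """Combine several years of CWT recoveries into a single 'base year'
    as a weighted sum, where each year's weight is its year class's
    relative abundance. Illustrative names and structures."""
    total_abund = sum(abundances_by_year.values())
    base = {}
    for year, recs in recoveries_by_year.items():
        w = abundances_by_year[year] / total_abund   # relative abundance weight
        for cell, n in recs.items():                 # cell = fishery/time stratum
            base[cell] = base.get(cell, 0.0) + w * n
    return base
```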

Jim Norris raised the question: How are the harvest rates used in the PM model estimated? He noted that the PM model does not have a specific estimation procedure for harvest rates. Jim said that for the model comparison program he planned to use either the FRAM harvest rate estimates, or to give PM the "true" harvest rates used by the BaseModel. Another option would be to use the SSM estimates.

Estimating the migration mapping between fisheries for the PM model also was discussed. Jim Norris said he planned to have the MigrationManager in the BaseModel keep a record of when fish moved from one fishery region into another. These data would provide the "true" migration mapping information. Gary Morishima suggested that the PMAnalyst object could estimate migration speeds from the measured BaseModel data, from which a migration mapping could be estimated.


4. Elements of decision theory and relevance to preseason planning (Ken Newman).

Ken reviewed the basic elements of decision theory:
-- action space (e.g., a set of choices for management plan);
-- possible states of nature (e.g., survival rates, migration patterns);
-- loss function (a function of the action taken and the particular state of nature).

Ken showed a simple discrete case example to illustrate two types of decision rules:
-- minimax rule (i.e., choose the action which has the smallest maximum loss);
-- Bayes rule (i.e., choose the action which has the smallest expected loss).
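The two decision rules can be stated compactly for the discrete case Ken showed (the loss table in the test below is a made-up example, not Ken's):

```python
def minimax_action(loss, actions, states):
    """Minimax rule: choose the action whose worst-case loss is smallest.
    `loss[action][state]` gives the loss for that pair."""
    return min(actions, key=lambda a: max(loss[a][s] for s in states))

def bayes_action(loss, actions, states, prior):
    """Bayes rule: choose the action with the smallest expected loss
    under a prior probability over states of nature."""
    return min(actions, key=lambda a: sum(prior[s] * loss[a][s] for s in states))
```

In a toy example, the minimax rule tends to pick the conservative action, while the Bayes rule can prefer a riskier action if the prior puts most weight on favorable states.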

Ken stated that the three most difficult issues in applying decision theory to salmon fishery management are:
-- selecting a loss function (e.g., how to resolve conflicting objectives of escapement levels, catch quotas, and catch allocations);
-- high degree of uncertainty in the states of nature;
-- virtually infinite number of possible actions.

To address the second issue, for a given fishery management plan (action), one can use the hierarchical state-space model to simulate different state-space model parameters. Given these parameters, simulate the abundances and catches, calculate the loss function for each simulation, and average the losses to obtain a measure of the expected loss.

To address the third issue of an infinite action space, Ken suggested a technique called "simulated annealing" for finding an "optimal" action. The basic idea is to start with an input action (i.e., a management plan expressed as a set of fishing effort matrices) at time t, call it a(t), and simulate the plan N times. From these N simulations calculate the expected loss function and let this be the objective function to be minimized. Then randomly perturb a(t) slightly to get a(t+1), simulate the new plan N times, and compare the two objective functions. Choose the better action (i.e., the plan with the smaller objective function) and repeat the process until no better action can be found.
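The search loop Ken described might look like the sketch below. One hedged addition beyond the minutes: classical simulated annealing occasionally accepts a *worse* plan, with a probability that falls as a "temperature" cools, which is what lets it escape local minima; the minutes describe only the greedy comparison step. All names here are illustrative, and `expected_loss(plan)` stands in for averaging the loss over N stochastic simulations of the plan.

```python
import math
import random

def anneal(initial_plan, expected_loss, perturb, n_iter=1000, t0=1.0, seed=0):
    """Simulated-annealing sketch over an action space.
    expected_loss(plan) -> average loss over N simulations (assumed given);
    perturb(plan, rng)  -> a randomly perturbed nearby plan."""
    rng = random.Random(seed)
    plan, loss = initial_plan, expected_loss(initial_plan)
    best_plan, best_loss = plan, loss
    for i in range(n_iter):
        temp = t0 * (1 - i / n_iter) + 1e-9          # linear cooling schedule
        cand = perturb(plan, rng)
        cand_loss = expected_loss(cand)
        # accept if better; if worse, accept with probability exp(-delta/temp)
        if cand_loss < loss or rng.random() < math.exp((loss - cand_loss) / temp):
            plan, loss = cand, cand_loss
            if loss < best_loss:
                best_plan, best_loss = plan, loss
    return best_plan, best_loss
```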

There was considerable discussion about this issue. Gary Morishima wanted to know why we were even considering this type of problem. Jim Norris explained that in order to simulate management actions over many years (a NMFS objective for ESA type analyses), the computer program would require some method of simulating management decisions that currently are handled within the political arena during the preseason planning process (e.g., trying to simultaneously satisfy multiple catch allocation requirements and multiple escapement goals).

Jim Scott pointed out that the current version of FRAM has an algorithm for computing terminal area catches such that escapement goals and catch allocations are satisfied. In essence, this algorithm subtracts the escapement goals from the terminal run sizes entering the terminal area and then allocates the available catch among Treaty and non-Treaty terminal area fisheries. This algorithm operates over several timesteps.

It seemed to be the consensus of the group that there needs to be a distinction between using an algorithm to (1) find solutions to implement a given set of decision rules, and (2) searching for an "optimum" harvest plan.

Ken's handouts on this subject are available at: /newman/dectheory.pdf


5. Estimating migration parameters in more complex spatial frameworks (Ken Newman).

Up to this point Ken's SSM has used only a single line segment to describe a salmon migration path. While this may be adequate for coastal stocks, it is not adequate for stocks entering inside waters, such as Puget Sound or Georgia Strait. Ken described a theoretical probability model to handle this more complex spatial framework. The basic idea is to describe the position of an individual fish as a 2 dimensional random variable [x(t), p(t)], where x(t) is a label for a region the fish is in at time t and p(t) is a particular location within x(t). To implement this type of model it will be necessary to develop a set of "stock rules" describing the possible regions each stock will potentially migrate through. The details of the model can be found in Ken's handouts on this topic, which are available at:

/newman/extensions.pdf


6. Combining multiple effort measures (Ken Newman).

Ken made the following "first cut" recommendations regarding combining multiple effort measures.
-- Given two fisheries with differing temporal resolution: partition the coarser data to the finer resolution, using smoothing techniques;
-- Given two fisheries with differing spatial partitioning, one of which is nested in the other: partition the coarser resolution fishery data to the finer scale, using smoothing techniques;
-- Given two fisheries with differing spatial partitioning, neither of which are nested in the other: assuming one is somewhat coarser than the other, first smooth the coarser fishery data and then partition the smoothed values according to the finer spatial resolution;
-- Given two fisheries with differing temporal resolution and non-nested spatial partitioning: first temporally partition the fishery with coarser temporal data, then spatially partition the fishery with coarser spatial data, in both cases using smoothing techniques.
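A minimal version of the first recommendation--partition coarser-resolution effort onto a finer grid using smoothing--is sketched below. The even-spread-then-moving-average smoother is an illustrative stand-in for whatever smoothing technique is ultimately chosen; the rescaling step preserves each coarse period's total.

```python
def partition_effort(coarse_totals, fine_per_coarse, window=3):
    """Partition coarse-period effort totals onto a finer time grid:
    spread each total evenly, smooth with a centered moving average,
    then rescale within each coarse period to preserve its total."""
    # 1. even spread of each coarse total over its fine cells
    fine = []
    for total in coarse_totals:
        fine.extend([total / fine_per_coarse] * fine_per_coarse)
    # 2. centered moving average (edges use a shortened window)
    half = window // 2
    smooth = []
    for i in range(len(fine)):
        lo, hi = max(0, i - half), min(len(fine), i + half + 1)
        smooth.append(sum(fine[lo:hi]) / (hi - lo))
    # 3. rescale each coarse block so its original total is preserved
    out = []
    for k, total in enumerate(coarse_totals):
        block = smooth[k * fine_per_coarse:(k + 1) * fine_per_coarse]
        s = sum(block)
        out.extend([total * b / s if s else 0.0 for b in block])
    return out
```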

Further comments on this subject are available at:

/newman/extensions.pdf


7. Applying the SSM model to chinook salmon (Ken Newman).

When applying the SSM to coho salmon, it is assumed that all individuals within a cohort mature during the same year. This assumption will not be valid for chinook salmon. Ken made several suggestions for modifying the SSM to include a maturation process. These suggestions can be viewed at:

/newman/extensions.pdf


8. Data issues (Robert Kope, Jim Scott).

Robert reported that he submitted a letter to the co-chairs of the PSC Technical Committee on Data Sharing (Norma Jean Sands and Susan Bates) requesting that latitude and longitude values be assigned to all recovery site locations used in the CWT database. Norma noted that she had not received the letter yet.

Jim passed out a table showing the different CWT sampling period types used by different agencies in the PSMFC database:

ADFG
-- Commercial Troll and Net: Reported by statistical week (Sunday start). Sample fraction is estimated by open period;
-- Sport: Marine data reported biweekly; Freshwater (?)

CDFO
-- Commercial Troll and Net: Reported by statistical week (Sunday start). Weeks for Jan, Feb, & Dec recoded as "40." 1981 data has a Monday start.
-- Sport: Marine reported by calendar month; Freshwater (?).

WDFW
-- Commercial Troll and Net: Reported by statistical week (Monday start), except troll data prior to 1990 reported by statistical month.
-- Sport: Marine and freshwater reported by statistical month.

ODFW
-- Commercial Troll and Net: Reported by statistical week (Monday start);
-- Sport: Marine and freshwater reported by statistical month.

CDFG
-- Commercial Troll: Reported semi-monthly;
-- Sport: Marine reported semi-monthly; Freshwater (?).

Jim also reported on several other questions:

Q1. How is effort reported in the Canadian database?

In the Canadian database commercial effort and CWT recoveries are generally reported by week (Sunday start), and the first Saturday of the year by definition terminates week 1. All weeks in Jan, Feb, and Mar are recoded to occur in a single period. Depending on source of the data, the week may be reported in the form MMW (e.g., 061 for the first week in June), or consecutively from 1 through the last week in the year.
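A tiny parser for the MMW week form is shown below. It is illustrative only: it handles neither the consecutive-week form nor the Jan-Mar recoding described above.

```python
def parse_mmw(code):
    """Parse a Canadian MMW week code (e.g. '061' = first week of June)
    into a (month, week) pair."""
    code = "%03d" % int(code)          # normalize to three digits
    return int(code[:2]), int(code[2])
```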

Q2. How is effort computed by time period for freezer boats in the troll fishery?

Recall that the catch and CWT recoveries from freezer boats are reallocated based upon the catch in the day and ice boats. Unfortunately, CDFO does not currently maintain a database with a similar reallocation of fishing effort. Estimates could be obtained easily from the reported catch and effort of day, ice, and freezer boats.

Q3. Are species specific effort data available for Canadian fisheries?

Jim was unable to locate any species specific effort data for Canadian fisheries.

Q4. How are species-specific effort data computed for Washington fisheries?

Species-specific effort data are computed in two steps. First, fisher or boat numbers are used to identify unique landings within a day. This eliminates the potential confounding effects of multiple landings by a single fisher within a day. Second, only the unique landings that sold fish of the species of interest are counted toward the species-specific effort. For example, if there were 20 unique landings, but only 19 sold coho salmon, the coho-specific effort would be 19.
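The two-step computation can be expressed directly. This is a sketch: the `landings` records are hypothetical (boat, day, species-sold) tuples, not the actual WDFW data layout.

```python
def species_effort(landings, species):
    """Two-step species-specific effort: identify unique (boat, day)
    landings, then count only those that sold the species of interest.
    Each landing is a (boat_id, day, set_of_species_sold) tuple."""
    unique = {}
    for boat, day, sold in landings:
        key = (boat, day)                       # collapses multiple landings
        unique.setdefault(key, set()).update(sold)
    return sum(1 for sold in unique.values() if species in sold)
```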

Jim's major conclusions are:

-- Need to re-compute the effort for freezer boats;
-- Species-specific effort is not very useful and we need to use total effort.


9. Work schedule (Jim Scott).

Jim organized four current tasks as follows:

Task 1. Model Code (Troy lead). Need to implement the harvest code, including shaker and CNR mortalities and different policy types (e.g., selective fisheries, escapement goals, catch allocations). The prototype will duplicate the PSC chinook model code without a GUI. It will include escapement goal management, but not selective fisheries or CNRs. Three more months are needed to finish this task. Jim Norris will organize a harvest algorithm work group to help resolve algorithm questions (meeting date set for June 16) with possible participants: Jim Norris, Jim Scott, Gary Morishima, Jennifer Gutman, Jim Packer, Robert Kope.

Task 2. Model Comparisons (Jim Norris lead). Need to get a fully functioning simulator completed and develop an experimental design for comparing models.

Task 3. Data Issues (Jim Scott, Jennifer, Cara, Din, and Carrie will coordinate). Need to get Humptulips data and Voight's Creek data for 86-91. Jim Norris agreed to send a tag code list to Cara. Din will get tag codes for Big Qualicum stock. Jennifer will provide NWIFC mapping of PSMFC database recovery codes to aggregated recovery regions for analysis.

Task 4. Spatial Complexity (Jennifer and Din). Ken's method of estimating migration paths for "inside" stocks will require "stock rules", or migration path definitions, for these stocks. Jennifer will create a set of rules for Voight's Creek; Din will create rules for Big Qualicum.


10. Meeting Schedule and Milestones (Jim Scott).

The following meeting schedule and milestones were established:

July 2 (Thursday):
-- Humptulips results;
-- Model code recommendations from the work group;
-- Model comparison program ready;
-- Experimental design recommendation for model comparisons.

July 30 (Thursday):
-- Model comparison results;
-- Voight's Creek and Big Qualicum results;
-- State space model preliminary decision (will it work better?).

August 27 (Thursday):
-- Look at other coastal stocks;
-- Another review of stock rules;
-- State space model decision refinement.

October 6 (Tuesday):
-- Code prototype completed;
-- Results for many stocks and years!

22. Minutes of the March 16, 1998 meeting, 6/01/98

To: NMFS Salmon Model Committee and interested parties.

FROM: Jim Norris

Subject:  Minutes of the March 16, 1998 meeting


Contents:

1. Attendance list.
2. Update on code development (Jim Norris, Troy Frever).
3. Data Group meeting report (Robert Kope, Ken Newman).
4. Report on FRAM vs PM Model comparisons (Jim Norris).
5. Report on PFMC meeting in San Francisco (Robert Kope).
6. COM vs Meta-Modeling (Troy Frever).


1. Attendance list.

Din Chen (CDFO)
Norma Jean Sands (ADFG)
Carrie Cook-Tabor (USFWS)
Rich Comstock (USFWS)
Steve Caromile (WDFW)
Jim Packer (WDFW)
Marianna Alexandersdottir (NWIFC)
Kristin Nason (NWIFC)
Ken Newman (UI)
Allan Hicks (UI)
Jim Norris (UW)
Troy Frever (UW)
Christine Mounchang (UW)
Robert Kope (NMFS NWFSC)
Cara Campbell (NMFS NWFSC)
Mindy Rowse (NMFS NWFSC)
Martin Liermann (NMFS NWFSC)
Paul Spencer (NMFS SWFSC)
Steve Lindley (NMFS SWFSC)
Marianne McClure (CRITFC)


2. Update on code development (Jim Norris, Troy Frever).

Jim reported that designing a code structure to handle all possible types
of harvest algorithms, especially those that may require some type of
iteration over multiple time steps, is the biggest challenge and is not yet
fully resolved. The current design makes the assumption that harvest rate
scalars (or effort scalars) will be the only control variables allowed to
change during an iteration routine. Other management controls, such as size
limits, catch ceilings, and escapement goals, will be assumed fixed during
iterations. The main design questions are: (1) which objects will be
allowed to change the control variables in order to satisfy conflicting
objectives, such as catch ceilings, catch allocations, and escapement
goals; and (2) in what order will iteration algorithms be performed.

Jim showed some results of two prototype models he designed using Excel and
its "Solver" routine ("Solver" is an add-in Excel routine that solves
linear and non-linear optimization problems). One model had four stocks
harvested by a single fishery in a single time step. Each stock had a
separate harvest rate. Two types of catch equations could be employed to
compute the catch for a single stock--one linear and the other non-linear.
The catch equations were user-defined functions with abundance, base
harvest rate, and relative effort scalar as input arguments.

  Catch = Abundance*Base_HR*HR_Scalar

  Catch = Abundance*(1-exp(-q*HR_Scalar))

where

  q = -ln(1-Base_HR)

The control variable was the relative effort level for the fishery.  Four
problems were defined as individual Solver models:

 -- unforced catch quota;
 -- forced catch quota;
 -- fixed escapement goal for one stock;
 -- fixed escapement goal for all stocks.

Solver found solutions for each of these problems. This prototype showed
how a non-linear optimization routine can be set up to solve catches at the
timestep/region/fishery level given common constraints. It also illustrated
the importance of clearly defining and separating the functional equations,
parameters, objective function, control variables, and constraints.  
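The non-linear catch equation, together with a solve for the effort scalar that forces a catch quota (the "forced catch quota" problem), can be sketched without Excel Solver. This is an illustrative bisection stand-in, assuming total catch increases with effort and the quota is attainable within the bracketing interval.

```python
import math

def catch(abundance, base_hr, effort):
    """Non-linear catch equation from the prototype: at effort = 1
    the catch equals Abundance * Base_HR."""
    q = -math.log(1.0 - base_hr)
    return abundance * (1.0 - math.exp(-q * effort))

def solve_effort_for_quota(abundances, base_hrs, quota, lo=0.0, hi=10.0):
    """Bisection on the effort scalar so that total catch over all
    stocks equals the quota (stand-in for Solver's nonlinear search)."""
    def total(e):
        return sum(catch(a, h, e) for a, h in zip(abundances, base_hrs))
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if total(mid) < quota:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```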

The second prototype had four stocks, four fisheries, and four timesteps
and used a non-linear catch equation. Again, several objectives were set up
as Solver models with the control variables being the relative effort
levels in each fishery in each time step. Jim showed results of a model
that maximized catch and satisfied the following objectives and constraints:

 -- catch in fishery 1, timestep 1 equal to 500 (i.e., simple catch ceiling);
 -- sum of catches in fishery 2, timesteps 1 and 2 equal to 800 (i.e.,
multi-phase catch ceiling--part one);
 -- relative effort levels in fishery 2, timesteps 1 and 2 are equal (i.e.,
multi-phase catch ceiling--part two);
 -- minimum escapement goals for all stocks;
 -- no relative effort levels can be lower than 0.25.

This prototype illustrated that several objectives could be satisfied by an
algorithm whose control variables spanned all timesteps and fisheries. In
other words, all objectives were satisfied simultaneously, rather than by
separate algorithms as is currently done in the PSC Chinook Model. This
prototype also pointed out the need to set constraints on relative effort
scalars when maximizing total catch. Jim reported that without the minimum
0.25 constraint on relative effort, the solution often involved completely
eliminating some fisheries.

Jim said that Solver could not find a solution when additional constraints
were added.

Troy presented a diagram showing the planned harvest code design. At the
timestep/region/fishery level, a catch equation (or more complicated
algorithm) will have the following:

 -- inputs about the current state of the system (e.g., physical
characteristics of the region, biological characteristics of the region,
abundances of all cohorts in the region at the start of the timestep,
fishery regulations, and gear restrictions);
 -- an input relative effort level (or relative harvest scalar);
 -- functions and subroutines containing the functional equations to compute
fishing mortalities by cohort.

At the same timestep/region/fishery level, a policy algorithm (designated a
level zero policy) will have:

 -- a control variable (i.e., the relative effort level for that
timestep/region/fishery);
 -- a list of constraints (e.g., quotas, escapements, allocations);
 -- functions and subroutines necessary to adjust the control variable such
that the constraints are satisfied.

Higher level policies can be defined to span multiple timesteps, regions,
or fisheries and also will have a control matrix, constraints, and
functions and subroutines. The idea is to create a nested hierarchy of
policies.

Robert Kope pointed out that these design ideas are fundamentally different
from existing programs which essentially do most of the iteration work by
trial and error without a formal statement of objectives. Instead, the
system is probed by educated guesswork until a politically acceptable
solution is determined.

Ken Newman suggested that a technique called "simulated annealing" might be
able to solve the more complicated problems that Jim said Excel Solver
could not solve. There was some discussion about using an approach similar
to the "Stock Synthesis Model" for groundfish in which the objective
function is divided into separate components, each with a weighting factor.
For example, catch allocation constraints could be weighted differently
from escapement goals.


3. Data Group meeting report (Robert Kope, Ken Newman).

Robert and Ken reported on a meeting of the Data Group held on February 5,
1998. Attendees were:

Ken Newman
Robert Kope
Jim Scott
Jim Norris
Marianna Alexandersdottir
Cara Campbell
Carrie Cook-Tabor
Judy Cress
Martin Liermann

The data needed for Ken to perform SSM estimates are:

 -- 1986-1991 CWT recoveries for Humptulips area coho releases (Cara);
 -- release numbers (Cara);
 -- 1986-1991 commercial troll effort (Carrie).

Cara and Carrie reported that they still need to coordinate with Jim Scott
on how to handle some data, especially data that span multiple regions.

At the Feb. 5 meeting Robert suggested assigning a single geographic
coordinate to each CWT recovery based on the boundaries of the recovery
region reported in the CWT database. For example, the single coordinate
might be the average of the coordinates defining the vertices of the
regions polygon. There was some discussion of this idea. One potential
problem is that the single coordinate might wind up being in a sub-region
that had no fishing effort during the period the recovery was made. For
example, a CWT reportedly recovered somewhere along the Washington coast
might have a single coordinate assigned in the central coast area, which was
not open when the recovery was made.

It was suggested that Robert draft a written proposal that could be
submitted to relevant entities for comment (e.g., Ken Johnson at PSMFC,
Data Sharing Committee, Data Standards Committee).

At the Feb. 5 meeting Jim Norris suggested that the next step for expanding
the SSM technique should be to focus on a single Puget Sound stock that had
lots of CWT recoveries and a branching migration pattern. Marianna
Alexandersdottir suggested that a good stock would be the Voights Creek
hatchery stock (Puyallup River).

There was some discussion of how to encourage the development of an
accepted and standardized coastwide effort database. Apparently there has
been some talk of doing this in the past, but no one was sure what the
current status was. It was suggested that a letter be drafted to the Data
Sharing Committee pointing out the need for such a database.


4.  Report on FRAM vs PM Model comparisons (Jim Norris).

Jim demonstrated an analysis tool (a computer program) he is developing (in
Visual Basic) to compare FRAM, the PM model, and SSM. The spatial structure
of the model is a linear migration path running from coordinates 0 to 1000.
The timestep is one day. The model has two cohorts (one marked, one
unmarked) and any number of fisheries. On initialization, the cohorts are
distributed along the migration path in either a deterministic or
stochastic manner. At the end of each day, individual schools of fish from
each cohort move along the migration path. Fisheries are defined over specific
regions along the migration path and are active on specific days. The
migration and harvesting processes can have unique functions and parameters
on each day.

The analysis process will proceed as follows:

a.  create synthetic daily catch data for a base period;
b.  measure and aggregate the daily catch data over some time interval
(e.g., weekly, monthly);
c.  estimate FRAM, PM, and SSM harvest rate and migration parameters using
the aggregated data;
d.  create synthetic daily catch data for an adjusted model (i.e., new
starting abundances, new harvest rates, new migration parameters);
e.  estimate catches for the adjusted model using FRAM, PM, and SSM models
and estimated parameters;
f.  compare estimated catches with synthetic daily catch data.

Jim stated that this approach will allow three sources of error to be
evaluated: model misspecification, measurement error, and process error.
There was a lengthy discussion about how different types of errors would be
evaluated by these methods.

At present the program only performs steps a and d, but does have a
user-friendly GUI. Jim's goal is to make the tool available on the web site.

Rich Comstock asked about using the coho run reconstructions for 1986-1991
to compare FRAM, PM, and SSM. The idea here would be to use data from five
of the years to estimate parameters and predict the catches for the sixth
year. There was general agreement that this would be a valuable exercise
also and would complement the synthetic data approach.


5. Report on PFMC meeting in San Francisco (Robert Kope).

Robert reported that the PFMC did not have any selective fisheries under
consideration for 1998. The harvesting options under consideration call for
either a 30% reduction in all (i.e., including Alaska and Canada) harvest
impacts on Snake River fall chinook or a 50% reduction in PFMC region impacts.


6. COM vs Meta-Modeling (Troy Frever).

Troy gave a brief report on two modeling approaches that may have some
concepts we can apply to the new model. The purpose of both approaches is to
facilitate development of complex computer models, such as those used for
ecosystem level systems.

The COM (Component Object Model) approach allows model components to be
shared independent of the programming language they were written in, or the
location of the model's executable program. The idea is to have interfaces
that pass outputs from one program into another program as inputs. For more
info on COM see:

http://wally.usfs.auburn.edu/conference/

The following abstract describes the meta-modeling approach:

The development of complex models can be greatly facilitated by the
utilization of libraries of reusable model components. In this paper we
describe an object-oriented module specification formalism (MSF) for
implementing archivable modules in support of continuous spatial modeling.
This declarative formalism provides the high level of abstraction necessary
for maximum generality, provides enough detail to allow a dynamic
simulation to be generated automatically, and avoids the "hard-coded"
implementation of space-time dynamics that makes procedural specifications
of limited usefulness for specifying archivable modules. A set of these
modules can be hierarchically linked within the MSF formalism to create a
MSF model. The MSF exists within the context of a modeling environment (an
integrated set of software tools which provide the computer services
necessary for simulation development and execution), which can offer
simulation services that are not possible in a loosely-coupled "federated"
environment, such as graphical module development and configuration,
automatic differentiation of model equations, run-time visualization of the
data and dynamics of any variable in the simulation, transparent
distributed computing within each module, and fully configurable space-time
representations. We believe this approach has great potential for bringing
the power of modular model development into the collaborative simulation
arena.

To learn more about Meta-Modeling see:

http://kabir.cbl.umces.edu/SME3/MetaModelsF.html


7. Next meeting is scheduled for May 26, 1998 at NMFS Montlake.

21. Jan. 20, 1998 minutes by Jim Norris, 2/06/98

TO: NMFS Salmon Model Committee and interested parties.

SUBJECT: Minutes of the January 20, 1998 meeting

FROM: Jim Norris

Contents:

1. Attendance list.
2. Update on maturation process code design (Jim Norris; Troy Frever).
3. Update on harvesting algorithm code design  (Jim Norris; Troy Frever).
4. Update on migration algorithms report (Jim Norris).
5. Report on Fisheries Oceanography symposium at UBC on Jan 8, 1998 (Jim
Norris).
6. Report of further model comparisons (Jim Norris).
7. Report on PM Model (Peter Lawson).
8.  Discussion of data issues for SSM Model (Group).
9.  Next meeting.


1.  Attendance.

Jim Scott (NMFS)
Steve Lindley (NMFS - SW)
Pete Lawson (NMFS)
Cara Campbell (NMFS)
Mindy Rowse (NMFS)
Martin Liermann (NMFS)
Jim Norris (UW)
Troy Frever (UW)
Judy Cress (UW)
Jennifer Gutmann  (NWIFC)
Kristin Nason (NWIFC)
Rich Comstock (USFWS)
Carrie Cook-Tabor (USFWS)
Norma Jean Sands (ADFG)
Marianna Alexandersdottir (WDFW)
Din Chen (CDFO)
Marianne McClure (CRITFC)


2. Update on maturation process code design (Jim Norris; Troy Frever).

Jim reviewed the maturation code design problem identified at the Nov 97
meeting. He began by reviewing what we think we know about the biological
process of maturation. Our understanding is that for almost all salmon
stocks, individuals within a cohort may or may not mature and return in a
given year. At the individual fish level, the decision to return is
determined by both genetic factors and environmental factors. On the
genetic side, each salmon species has a fairly consistent maturation
process (e.g., chinook mature over several years; coho tend to mature all
in same year). On the environmental side, individual fitness (e.g., general
body chemistry) during the spring or early summer may determine which
individuals actually decide to migrate home, and when.

The above biological realities imply that at any given time and region a
cohort may be divided into two distinct components (immature and mature)
that will have different migration patterns. The modeling question is
whether or not it is necessary to model each component of the cohort
separately.

If we assume no fishing mortality, ignore mature and immature status,
divide time and space into discrete units, and track the movement of each
individual fish, at any time step we could determine the fraction of the
fish that move from each cell to each other cell. From these data we could
define transition matrices for each time step that fully describe the
movement of the entire cohort, regardless of maturation status. In this
perspective (no fishing), it is not necessary to track mature and immature
fish separately.

However, the fishing process alters the relative composition of
immature/mature fish within a time/area cell. Since these two components
have different migration patterns, the migration pattern for the combined
cohort (immature and mature) can not be expressed by a single transition
matrix.

A new maturation process was added to the main engine just before the
migration process. The main loop in the computation engine now looks like
this:

YearInit(year);

TimeStepIter time(clock);
while (++time) {
          naturalMortalityManager.takeNaturalMortality(clock);
          fisheryManager.takeFishingMortality(clock);
          spawningManager.spawnCohorts(clock);
NEW ==>   maturationManager.maturateCohorts(clock);
          migrationManager.migrateCohorts(clock);
}

for (i = 0; i < Stocks.num(); ++i) {
    Stocks[i].year_wrapup();
}

Jim noted that it may seem odd to place the maturation process after the
spawning process. This was done because, for modeling purposes, the
maturation process is most closely linked to the migration process rather
than the spawning process. The ordering doesn't matter much because the
spawning process will only be activated in certain time steps and/or
regions.

At the Nov 97 meeting two approaches to coding the mature and immature
components of a cohort were discussed. One was to treat the immature and
mature components as separate cohorts; the other was to keep separate
abundance vectors for immature and mature components within the same
cohort. The first approach was selected.

At config time, all the cohorts can be created, or they can be created as
needed. Mature/Immature will be a part of the CohortID. The abundance
vectors for mature cohorts created at config time will be initialized at
zero, and will be filled in during the maturation process at appropriate
time steps.

This approach will require twice the number of cohorts as previously
planned, and twice the data storage requirements (assuming we need to track
data for immature and mature cohorts separately).

Jim showed a draft SSM formulation that includes the maturation process.
Recall that the SSM without maturation is:

    n(t) = M(t) S(t) n(t-1)

    c(t) = H(t) n(t)

This formulation assumes that the cohort abundance being tracked is the
mature cohort. Or stated another way, this model assumes that the
maturation process occurred prior to time t = 0. This may be a reasonable
assumption for coho, but not for chinook.

Now consider tracking the mature and immature components separately. Let

    n'(t) = immature abundance vector at time t;

    n"(t) = mature abundance vector at time t;

    M'(t) = transition matrix for immature fish;

    M"(t) = transition matrix for mature fish;

    m(t)  = maturation matrix at time t,

where m(t) is a diagonal matrix with diagonal elements equal to the
maturation rates by region for time step t and all other elements = 0. If
we assume that mature and immature fish have the same survival and harvest
matrices (in practical terms this probably means assuming they have the
same size, vulnerability to gear, feeding behavior, etc), we can write the
SSM as:

    n'(t) = M'(t) [I - m(t)] S(t) n'(t-1)

    n"(t) = M"(t) [S(t) n"(t-1) + m(t) S(t) n'(t-1)]

    c(t)  = H(t) [n'(t) + n"(t)]

Jim passed out email from Ken Newman describing an alternative formulation.
There was no further discussion on this subject.

-------- Excerpted from Ken Newman email --------

In terms of SSM formulation, I think maturation can be easily incorporated.
The estimation problem is a different matter.  Some notation:

  K    = #areas stock can move into
  n[t] = a 2K x 1 abundance vector at time t
        w/ the first K components being immature, n[t,i]
        + second K components being mature, n[t,m]
  M[t] = a 2K x 2K block diagonal migration matrix at time t
  S[t] = a 2K x 2K diagonal survival matrix at time t
  P[t] = a 2K x 2K "maturation" matrix at time t
  c[t] = a Kx1 catch vector at time t
          -doesn't distinguish between immature and mature fish
  H[t] = a K x 2K harvest matrix

Suppose there's a certain point in time, t*, where the maturation "switch"
is flipped on for the stock- for t<t* the stock has same migration
probabilities, for t >= t* there's a different set of migration
probabilities.

Then for t<t* (ignoring error terms)

  n[t]     =      M[t]             S[t]            P[t]      n[t-1]

|n[1,t,i]|  = |             | |               | |         | |n[1,t-1,i]|
|n[2,t,i]|    |  M[t,i]   0 | | S[t,i]      0 | |  I     0| |n[2,t-1,i]|
|...     |    |             | | ...           | |         | |...       |
|n[K,t,i]|    |             | |               | |         | |n[K,t-1,i]|
|0       |    |             | |               | |         | |0         |
|...     |    |    0      0 | |       0     0 | |  0     0| |...       |
|0       |    |             | |               | |         | |0         |

  c[t]    =      H[t]                          n[t]
|c[1,t]|  = |h[1,t,i]         0   0 0 .. 0 | |n[1,t,i]|
|c[2,t]|    | 0   h[2,t,i]    0   0 0 .. 0 | |n[2,t,i]|
|...   |    |...             ..   ..  .. . | |...     |
|c[K,t]|    | 0   0 .... h[k,t,i] 0 0 .. 0 | |n[K,t,i]|
                                             |0       |
                                             |..      |
                                             |0       |

Then at t*, when maturation begins, have to insert different migration
sub-matrix in M[t], possibly different survival sub-matrix in S[t], the
maturation probabilities into P[t] (presumably constant by area, say p),
and fill in the harvest rates on the maturing portion:

 t=t*

   n[t]     =      M[t]            S[t]             P[t]        n[t-1]

|n[1,t,i]|  = |            | |               | |          | |n[1,t-1,i]|
|n[2,t,i]|    |  M[t,i]  0 | | S[t,i]      0 | |  [1-p]  0| |n[2,t-1,i]|
|...     |    |            | | ...           | |          | |...       |
|n[K,t,i]|    |            | |               | |          | |n[K,t-1,i]|
|n[1,t,m]|    |            | |               | |          | |0         |
|...     |    |  0   M[t,m]| |     0   S[t,m]| |  [p]    0| |...       |
|n[K,t,m]|    |            | |               | |          | |0         |

  c[t]    =      H[t]                                n[t]

|c[1,t]|  = |h[1,t,i]         0   h[1,t,m]     0 | |n[1,t,i]|
|c[2,t]|    | 0   h[2,t,i]    0   0   h[2,t,m] 0 | |n[2,t,i]|
|...   |    |...             ..   ..  .. .       | |...     |
|c[K,t]|    | 0   0 .... h[k,t,i] 0 0 ..h[K,t,m] | |n[K,t,i]|
                                                   |n[1,t,m]|
                                                   |..      |
                                                   |n[K,t,m]|

For t>t*, the maturation matrix could become 2 "stacked" identity matrices
=> no more maturation.

Remarks:

1. Catch vector, what's observable doesn't change- assuming that there's no
way to distinguish the maturing from immature CWT recovery.  One thing
overlooked here is the sex issue => need to know sex for CWT recoveries (is
that available?).

2. regarding estimation,
- now need to know or set t*
- need to estimate maturation probability, p, and this would change with age
- have more parameters in M[t] to estimate (and possibly more in S[t] if
make a distinction between mature and immature natural survival rates).

Don't know how estimable these parameters will prove to be.

--------- end Ken Newman email -------------


3. Update on harvesting algorithm code design (Jim Norris; Troy Frever).

Jim reported that he and Troy have been attempting to define harvest
algorithm code objects that reflect real world management policies. The
goal is to encapsulate algorithms as efficiently as possible. The current
plan is to have two classes -- policy and regulation.

The Policy Class will have:

-- Mathematical algorithm to compute harvests over a group of fisheries, a
continuous time period, and a selection of regions;
-- An optional last time step control function (e.g., to determine if
another iteration is necessary);
-- Data necessary to implement, or knows how to get the data (e.g., from a
cohort or fishery object).

To illustrate how the concept of a policy relates to real world management,
Jim showed a Fishery Management Hierarchy chart that organizes harvesting
units by management agency:

MGT AGENCY             HARVEST UNIT

PICES                     World

PSC                    Canada, USA

ESA &
Indian Treaties      Treaty, Non-Treaty

NPFMC &
PFMC                   AK, WA, OR, CA

State & Regs        Individual Fisheries
Tribal               by Region and Time

Policy objects might include algorithms to reflect management objectives at
different levels of organization (e.g., catch ceilings in some fisheries
set by PSC; harvest rates set by States and Tribes).

A Regulation Class object would contain all the information necessary to
compute harvests for each cohort in a given fishery, region, and time step.
This might include gear regulations (e.g., hook type, net length, mesh
size), fish regulations (e.g., size limit, bag limit, species
restrictions), and mortality algorithms (e.g., legal and incidental
catches).

Most of the details are yet to be worked out. The important conclusion at
this stage is that within a time step during computations, the computation
engine will most likely loop through policies, rather than fisheries,
cohorts, or regions.

Jim Scott noted that FRAM has a policy algorithm that requires equal
catches for Treaty and non-Treaty in some terminal areas.


4. Update on migration algorithms report (Jim Norris).

Jim reported that two issues from the draft report on migration algorithms
presented at the June meeting had been resolved. The first is that it is
now agreed that the PM Model does make the tacit assumption that fish
migrate from the donor regions at the same rate. The second is that the
migration algorithm used in FRAM has now been clarified.

The main computation engine in FRAM includes subroutines to compute
preterminal harvests, maturation, terminal harvests, and escapements ALL
WITHIN EACH TIMESTEP. For coho FRAM, the timestep is one month. Thus, each
timestep (month) in FRAM can be thought of as being composed of three
sub-steps: preterminal, terminal, and spawning.

At the end of the preterminal sub-step, the maturation rate determines how
many fish migrate from the preterminal area into the terminal area (the
immatures remain in the preterminal area). At the end of the terminal
sub-step, 100% of the preterminal run stays in the preterminal area and
100% of the surviving terminal run migrates into the spawning area. Thus,
there is never any carryover of fish in the terminal area between time
periods.

These assumptions make it simple to estimate from CWT recoveries the
maturation rates for each time step, as follows (working back from final
time step where maturation rate = 100%):

           CWT Recoveries in Esc + Terminal Area Harvest
MatRate = -----------------------------------------------
              Cohort Size After Mixed Immature Harvest

Jim will update the draft report to include the FRAM algorithms.


5. Report on Fisheries Oceanography Symposium at UBC on Jan 8, 1998 (Jim
Norris).

Jim Norris reported that UBC has raised funds to endow a Fisheries
Oceanography chair, and this symposium was the first event to celebrate the
new program. The eight presentations will be written up into a book. Jim
gave a summary of Mike Healey's talk on sockeye salmon migration.

Healey reported on their use of NerkaSim to examine migration behavior of
Fraser River sockeye. This model (NerkaSim) integrates ocean surface
current predictions from the NMFS OSCURS model, sea surface temp data,
zooplankton data, a bioenergetics growth model, and individual decision
rules for fish migration behavior. They divide the life cycle into four
stages and derive decision rules for each stage under the assumption that
each fish has three ecological objectives:

Obj 1. Accumulate surplus energy for growth and maturation (i.e., find food;
avoid energetically costly behaviors);
Obj 2. Survive (i.e., avoid predators);
Obj 3. Get in position for the next phase.

Here are their resulting decision rules for each migration phase:

Phase 1. Juvenile nearshore migration up the coast of BC and around the
Gulf of Alaska to about Kodiak Island.
-- Rule 1. Maximize energy accumulation by feeding actively and minimizing
foraging costs.
-- Rule 2. If foraging needs are met, move in a northwest direction.
-- Rule 3. Swim away from low salinity water or use other mechanisms to
avoid getting trapped in fjords.

Phase 2. Open ocean adult feeding in Gulf of Alaska.
-- Rule 1. If in a patch of feed, stay there.
-- Rule 2. If not in a patch, find one.
-- Rule 3. Avoid predators.

Note. These Phase 2 rules contradict the previous conventional wisdom that
sockeye in the Gulf of Alaska make one or two "loops" around the Alaskan
gyre. By combining all the separate models (current, temp, food, growth,
migration), they were unable to simulate enough fish growth with the
"looping" hypothesis.

Phase 3. Directional homing migration from open ocean to BC coast.
-- Rule 1. Swim east at an energetically efficient speed.
-- Rule 2. Forage opportunistically along the route.

Phase 4. Nearshore coastal migration to mouth of Fraser River.
-- Rule 1. Swim SE in an energetically efficient manner.
-- Rule 2. Avoid low salinity water that does not smell like home.
-- Rule 3. Find low salinity water that smells like home.
-- Rule 4. Avoid high temperatures.

Their major conclusions were:
-- Need to look at migration in an ecological context. That is, don't treat
migration as simply a behavior pattern.
-- Objective modeling reveals how much complexity the ocean dynamics can
introduce into migration behavior.


6. Report of further model comparisons (Jim Norris).

Jim briefly outlined a procedure he planned to use to compare FRAM and PM,
and eventually the SSM. In the near future he will post a more detailed
description of the procedure on the web discussion page. At this time he
agreed to have the FRAM and PM comparisons completed by the next meeting
(March 16) and that he would attempt to include the SSM in the comparisons
by the May meeting.


7.  Report on PM Model (Peter Lawson).

Pete provided background information on the development of the PM Model.
This model was created to address two fundamental problems in "single pool"
models, such as FRAM:

-- Fishing mortalities are adjusted in linear proportion to expected effort
(this approach can substantially underestimate non-catch mortality in
selective fisheries, especially when local harvest rates on marked fish are
relatively high);

-- Fish saved in one geographic region (say Washington) are assumed to be
available for harvest in another geographic region (say California).

Pete described the algorithms he and Dave Sampson developed to address the
first problem and provided reprints of their paper:

Lawson, P. W., and D. B. Sampson. 1996. Gear-related mortality in selective
fisheries for ocean salmon. N. Amer. J. Fish. Mgt. 16(3): 512-520.

Their approach specifically accounts for factors such as drop-off rate and
mortality, mark recognition rate, and release mortality rate.


8. Discussion of data issues for SSM Model (Group).

The last agenda item concerned getting acceptable data to fit to Ken
Newman's State Space Model (SSM). [For those new to this committee, our web
site -- /harvest/discussion/ -- has
a few papers on the SSM and detailed minutes from previous meetings.] The
general data in question are CWT recoveries for coho salmon stocks and
fishing effort data, all collated on a weekly basis.

Jim Norris briefly introduced the subject and a general discussion
followed. From the discussion the current status of this issue can be
summarized in the following statements.

-- Over the past year, Rich Comstock and Carrie Cook-Tabor have provided
several datasets to Ken Newman that they feel correctly respond to his data
requests;

-- Ken has responded with further requests for data clarification or for
additional data manipulation (whether or not certain data are or can be
summarized on a weekly basis seems to be an important issue);

-- Rich, Carrie, and Ken do not have time to make adjustments to the
datasets already prepared and/or to create new datasets that Ken will find
acceptable;

-- Judy Cress (CRiSP group) and Cara Campbell (NMFS) have been asked to
provide some assistance in getting acceptable data, but neither feels that
"starting over" is the best course of action.

The general discussion raised the following concerns about gathering data
to fit to the SSM, and about the utility of the SSM in general.

-- The Model Committee does not have a well-defined, agreed-upon procedure
for gathering and collating the data to be fit to the SSM;

-- The project is falling behind schedule, and the SSM will not be ready
for use by Sep 98 to start planning the 1999 season. In particular, Jim
Scott's concept was that Ken would have SSM techniques developed by Sep 98
so other scientists (States, Tribes, NMFS) could then apply the techniques
to many more stocks;

-- The problem of estimating migration parameters (with the SSM techniques)
for stocks following a non-linear migration path (e.g., coho straying and
wandering inside Puget Sound) has not been resolved;

-- Does the SSM technique work any better than the PM or FRAM methods?

-- Can the CWT and fishing effort data be accurately collated on a weekly
basis for all regions?

-- If CWT and effort data cannot be accurately collated on a weekly time
basis for all regions, can the SSM still be fit to data collated over a
longer time frame (bi-weekly? monthly?) and still provide migration
parameter estimates with acceptable precision?

-- What effect, if any, does collating CWT data over time strata that are
not consistent with catch/area sampling data have on parameter estimates?

Just prior to the Jan 20 meeting Ken Newman sent email to Tom Wainwright in
which he outlined six possible MS Thesis topics (related to the SSM) he
recently discussed with a new graduate student. Although this information
was not presented at the Jan 20 meeting, it is included here because of its
pertinence to this subject.

------ Excerpt from Ken Newman email dated Jan 19, 1998 --------

I just sat down with my new grad student working on this project to look at
potential thesis topics for him and found at least 6:

1. combining multiple years of release/recovery information via Empirical
Bayes and SSM- namely how to best simulate a potential fisheries mgmt plan

2. incorporating disparate time scales of effort information, and
non-disjoint spatial regions of recoveries- namely how to best use effort
data from commercial (wkly) and sport fisheries (mthly) and their
recoveries (from catch reporting regions that often overlap)

3. modeling migration on a non-linear spatial framework- namely adding the
inside spurs for Puget Sound, Georgia Straits, etc.

4. estimating maturation rates (should be sex specific, too) for chinook
and determining how separable such estimates are from survival and movement
rates

5. linking various parameter estimates (initial spatial distribution,
survival, movement, maturation) to environmental conditions, origin of
release, `nature' of release (whatever might have been done at the
hatchery, say)

6. parameter estimation for a more realistic non-normal SSM (the problem
I've been struggling with for 4 months now)

----- end Ken Newman email -------------

The committee agreed to form a SSM Data Group to convene ASAP to develop a
strategy for resolving the SSM data issues. Proposed members of the SSM
Data Group are:

Jim Scott
Ken Newman
Jim Norris
Robert Kope
Cara Campbell
Judy Cress
Marianna Alexandersdottir
Carrie Cook-Tabor

Jim Norris recommended that the SSM Data Group focus on gathering data for
a single Puget Sound coho stock that is known to have (1) a large number of
CWT recoveries, and (2) considerable straying or wandering inside Puget
Sound. He suggested that attempting to apply the SSM to such a stock should
address many of the data specific concerns raised at yesterday's meeting.


9. Next meeting.

The next meeting will be Monday March 16 at NMFS Montlake at 9:00 am.

20. Maturation code update by Jim Norris, 12/10/97

To: NMFS Model Committee

From: Jim Norris, Troy Frever

Subject: Maturation code update

First, I want to thank members of the Model Committee for pointing out our failure to properly code the maturation process. This was a significant oversight on our part. Fortunately, at this stage in development it is fairly easily fixed. Below is a summary of our proposed changes.

The biological process.

Our understanding is that for almost all salmon stocks, individuals within a cohort may or may not mature and return in a given year. At the individual fish level, the decision to return is determined by both genetic factors and environmental factors. On the genetic side, each salmon species has a fairly consistent maturation process (e.g., chinook mature over several years; coho tend to mature all in same year). On the environmental side, individual fitness (e.g., general body chemistry) during the spring or early summer may determine which individuals actually decide to migrate home, and when.

Mathematical Modeling Problem.

The above biological realities imply that at any given time and region a cohort may be divided into two distinct components (immature and mature) that will have different migration patterns. The modeling question is whether or not it is necessary to model each component of the cohort separately.

If we assume no fishing mortality, ignore mature and immature status, divide time and space into discrete units, and track the movement of each individual fish, at any time step we could determine the fraction of the fish that move from each cell to each other cell. From these data we could define transition matrices for each time step that fully describe the movement of the entire cohort, regardless of maturation status. In this perspective (no fishing), it does not seem necessary to track mature and immature fish separately.

However, the fishing process will alter the relative composition of immature/mature fish within a time/area cell. Since these two components have different migration patterns, the migration pattern for the combined cohort (immature and mature) can no longer be expressed by a single transition matrix.

New Main Engine.

We will add a new maturation process to the main engine just before the migration process. Thus, the main loop in the computation engine will look something like this:

----- revised computation engine ------

  YearInit(year);

  TimeStepIter time(clock);
  while (++time) {
      naturalMortalityManager.takeNaturalMortality(clock);
      fisheryManager.takeFishingMortality(clock);
      spawningManager.spawnCohorts(clock);
NEW ==>    maturationManager.maturateCohorts(clock);
      migrationManager.migrateCohorts(clock);
  }

  for (i = 0; i < Stocks.num(); ++i) {
      Stocks[i].year_wrapup();
  }

----------- end revised engine -------

It may seem odd to place the maturation process after the spawning process. We do this because for modeling purposes the maturation process is most closely linked to the migration process rather than the spawning process. I don't think the order matters much because the spawning process will only be activated in certain time steps and/or regions. We are still evaluating this ordering and haven't made a firm decision yet. Any comments?

Cohort Objects and Data Tracking.

At the meeting we discussed two possible approaches to coding the maturation process. One was to treat the immature and mature components as separate cohorts; the other was to keep separate abundance vectors for immature and mature components within the same cohort.

We note that maturation is the only model process that transfers fish from one abundance vector (immature) to another (mature) within the same time/area cell. We also note that the immature and mature components share many characteristics (e.g., species, stock, brood year, age, mark status, tag status). Despite these considerations that suggest keeping immature and mature vectors within the same cohort, we decided to use the first alternative and treat immature and mature fish as separate cohorts. Our reasons are:

1. From a biological perspective, the immature and mature components are biologically separate cohorts, in the sense that they have significantly different demographic characteristics, mainly a different migration pattern. They also may have a different growth rate or size distribution.

2. From a coding perspective, tracking two components within the same cohort would be messy (separate abundance vectors, separate transition matrices, maybe separate growth functions, etc.) and would violate our basic concept of what a cohort is.

At config time, all the cohorts can still be created. Mature/Immature will be a part of the CohortID. The abundance vectors for mature cohorts will be initialized at zero, and will be filled in during the maturation process at appropriate time steps.

This approach will require twice the number of cohorts as previously planned, and twice the data storage requirements (assuming we need to track data for immature and mature cohorts separately).


Relationship to State Space Model.

An important consideration is how the maturation process will be estimated. As a first step, I've added the maturation process to the SSM. I have no idea whether or not the parameters in the new formulation can be estimated, but the exercise helps clarify some of the modeling questions. I'm hoping that these ideas will spark further ideas from those more familiar with the estimation procedures.

Recall that the SSM formulation (in matrix notation) is:

    n(t) = M(t) S(t) n(t-1)

    c(t) = H(t) n(t)

This formulation assumes that the cohort abundance being tracked is the mature cohort. Or stated another way, this model assumes that the maturation process occurred prior to time t = 0. This may be a reasonable assumption for coho, but not for chinook.

Now consider tracking the mature and immature components separately. Let

    n'(t) = immature abundance vector at time t;

    n"(t) = mature abundance vector at time t;

    M'(t) = transition matrix for immature fish;

    M"(t) = transition matrix for mature fish;

    m(t)  = maturation matrix at time t,

where m(t) is a diagonal matrix with diagonal elements equal to the maturation rates by region for time step t and all other elements = 0. In most cases I suspect that at any time t, the elements of m(t) will be assumed identical, implying that the maturation process is the same over all regions.

To simplify, assume that immature fish do not migrate during the modeling period, and thus M'(t) = I (the identity matrix). If we further assume that mature and immature fish have the same survival and harvest matrices (in practical terms this probably means assuming they have the same size, vulnerability to gear, feeding behavior, etc), we can write the SSM as:

    n'(t) = [I - m(t)] S(t) n'(t-1)

    n"(t) = M"(t) [S(t) n"(t-1) + m(t) S(t) n'(t-1)]

    c(t)  = H(t) [n'(t) + n"(t)]
   
In considering how to model the maturation process, I see three key parameters: (1) what is the total maturation rate for the given year--15%? 75%?; (2) when is the peak maturation date--the date on which the most immature fish become mature fish--Julian day 156? 275?; and (3) over what time range does the maturation process continue--two weeks? two months?

If one assumes that the maturation process is independent of region (i.e., for any given time t, the diagonal elements of m(t) are identical) and fixes one or two of the parameters mentioned above (based on other biological information), it seems that the maturation process could be modeled with only one or two additional parameters to estimate.

Is this feasible?

Other ideas?

19. Nov. 6, 1997 minutes by Jim Norris, 12/10/97

TO: NMFS Salmon Model Committee and interested parties.

SUBJECT: Minutes of the November 6, 1997 meeting

FROM: Jim Norris

Contents:

1.  Attendance list.
2.  Update on overall code design and development (Jim Norris).
3.  Update on code details (Troy Frever).
4.  Future code development (Jim Norris).
5.  Update on State Space Model development and application (Ken Newman).
6.  Model comparisons update (Jim Norris).
7.  Miscellaneous items.

1.  Attendance.

Robert Kope (NMFS)
Jim Scott (NMFS)
Tom Wainwright (NMFS - NWFSC - Newport)
Steve Lindley (NMFS - SW)
Pete Lawson (NMFS)
Ken Newman (UI)
Jim Norris (UW)
Troy Frever (UW)
Jim Anderson (UW)
Christine Muongchanh (UW)
Jennifer Gutmann  (NWIFC)
Carrie Cook-Tabor (USFWS)
Norma Jean Sands (ADFG)
Marianna Alexandersdottir (WDFW)
Din Chen (CDFO)
Brent Hargreaves (CDFO)
Marianne McClure (CRITFC)


2.  Update on overall code design and development (Jim Norris).

2.a. Explicit timesteps and regions.

Jim stated that the original code objective is still valid: create a computing engine that runs in chronological order and is based on physical and biological processes. Jim then presented overheads showing the old Crisp Harvest main engine and the new modified version that incorporates explicit timesteps, regions, and a migration process. The annual loop in the old engine was:

for (int year = 0; year < Chronographer->nyears(); ++year) {
    int i;

    YearInit(year);

    for (i = 0; i < Stocks.num(); ++i) {
        Stocks[i].ocean_mortality();
    }

    FisheryManager.take_preterm_harvests();

    for (i = 0; i < Stocks.num(); ++i)
        Stocks[i].maturate();

    FisheryManager.take_terminal_harvests();
    FisheryManager.take_river_harvests();

    for (i = 0; i < Stocks.num(); ++i) {
        Stocks[i].update_cohort_from_harvests();
        Stocks[i].set_escapement();
        Stocks[i].apply_idls(); // apply idls to escapements
        double age1 = Stocks[i].spawn();
        Stocks[i].age_cohorts(age1);
        Stocks[i].year_wrapup();
    }

    Harvests.traverse(&Harvest::year_wrapup);

    for (i = 0; i < Fisheries.num(); ++i)
        Fisheries[i].year_wrapup();
}

Jim noted that in the old engine the important processes (natural mortality, preterminal fishing mortality, maturation, terminal fishing mortality, spawning, and aging) all occur in a fixed order within a given year. In the new engine these processes may occur within each timestep within a given year, and the maturation process is replaced by a migration process. The annual loop in the new engine is:

for (int year = 0; year < clock.nYears(); ++year) {
     int i;

     YearInit(year);

     TimeStepIter time(clock);
     while (++time) {

      naturalMortalityManager.takeNaturalMortality(clock);
      FisheryManager.takeFishingMortality(clock);
      spawningManager.spawnCohorts(clock);
      migrationManager.migrateCohorts(clock);
     }

     for (i = 0; i < Stocks.num(); ++i) {
        Stocks[i].year_wrapup();
     }

     Harvests.traverse(&Harvest::year_wrapup);

     for (i=0; i < Fisheries.num(); ++i)
        Fisheries[i].year_wrapup();
}

In the new engine there is a timestep loop within each year, and during each timestep the natural mortality, fishing mortality, spawning, and migration processes all can occur. Jim reported that this new configuration is in place and validates with the previous code (i.e., produces the same results when given the same input data).

2.b. Cohort tracking.

The new code tracks cohorts differently. The old code tracked synthetic cohorts (ages one through five for each stock; all ages from different brood years). The new code tracks true cohorts (a single group of fish from the same stock and brood year with identical demographic characteristics). The objective is to allow greater flexibility in defining cohorts. Under the new method, the age of the cohort is updated at each year and timestep.

Each cohort has the following objects:

CohortID. This specifies the cohort characteristics: species, stock, brood year, production type (hatchery or wild), tag status, mark status, sex, growth group, genetic group, etc.

CurrentAge. Updated at each timestep to keep track of the cohort age.

AbundanceVector. Keeps track of cohort working abundance by region. Within a timestep, processes modify these abundances.

EntryAbundanceMatrix. Records the working abundance by region at the start of each timestep. This data is needed for some of the process algorithms.

NaturalMortalityMatrix. Records natural mortalities by region during each timestep. This data is needed for some of the process algorithms.

Life history data for each cohort will be stored outside the cohort object in a data manager. If storage space permits, the following data will be stored for each year, time step, and region:

-- starting abundance
-- average individual fish length
-- natural mortality (number of fish)
-- legal, shaker, and cnr mortalities by fishery
-- immigration into the region (number of fish)
-- emigration out of the region (number of fish)
-- ending abundance.

Jim estimated that for a FRAM (coho) configuration the total storage space required would be 3.95 MB. For a PSC Chinook Model configuration (running for 25 years) the total storage space required would be 5.02 MB.

2.c. Migration Process.

For each cohort, the migration process works in the following manner. The MigrationManager receives the CurrentYear and TimeStep from the Chronograph and receives the CohortID and CurrentAge from the cohort. This information is used to extract the correct TransitionMatrix (TM) from the TM database, which is maintained by the MigrationManager. The TM is then multiplied by the AbundanceVector, and the result is inserted into the appropriate column of the EntryAbundanceMatrix.

Several committee members pointed out that this code design will not properly model the maturation process. The basic problem is that during some timesteps each cohort may divide into two distinct cohorts (immature and mature) each with different migration patterns. The fishing process can alter the relative composition of immature and mature components, thus altering the migration pattern of the total cohort. After a lengthy discussion, Jim agreed that a major revision of the design was required. Two alternatives were discussed: (1) create separate cohort objects for the immature and mature components; or (2) create two AbundanceVectors within a cohort, one each for the immature and mature components. [Alternative one has been selected. See the discussion page for a complete description of the changes.]


3.  Update on code details (Troy Frever).

At the August meeting, the committee recommended testing the Generic Parameter System (introduced at the August meeting; see minutes for a description of the system) to ensure that it wouldn't be too slow. Troy presented results from these tests. In general, a completely flexible Generic Parameter System did prove to be unacceptably slow.

As a compromise solution, Troy described a modified Generic Parameter System (called a Generic Array System) that requires some knowledge of parameter dimensions at run time.  For example, a configuration file may specify that during this model run a parameter will vary by time and age. The data for that parameter are then stored in a two dimensional array (time x age). This compromise system is a bit slower than a system that has the array dimensions hard-coded, but is faster than the completely flexible Generic Parameter System.

Currently, the transition matrix data are stored using this modified system. Troy recommends that we not worry too much about computation speed at present.  He advises that standard software development practice is to fully develop code first, then create a detailed profile of the execution time using diagnostic tools designed for that purpose.  Problem areas may then be addressed.

Troy briefly described the concept of a process table. Functions (processes) performing similar operations are placed in a table and selected at runtime based on the state of particular control variables.  A specific example is Natural Mortality, where various different natural mortality algorithms are selected based upon the current timestep.  This approach generalizes in object-oriented programming to the selection of a process object which also carries process-specific configurable data.  This approach allows both processes and data to be selected at runtime from a configuration file. The Generic Array system (described above) is a suitable implementation for this design, and is currently in use in several parts of the model.

Troy concluded by summarizing the status of the model. The migration process is in place, although it will apparently have to be modified to accommodate a new maturation process, as discussed earlier. The next big step will be to add the fishing mortalities. These tasks are estimated to be completed by the end of the year, depending on Troy's time.

There was a question of "why do we need `time' with each abundance?" The answer was: (1) to be able to report summary data easily; and (2) those data are required for fishing algorithms that span multiple time steps within a year. Some fishing mortality algorithms in the PSC Chinook Model require iteration across the preterminal and terminal time steps (i.e., the ocean net fisheries are assumed to harvest age 2 and 3 fish as immatures and age 4 and 5 fish as matures).

Troy and Jim Norris would like to remove the requirement for the code to be flexible enough to iterate over multiple time steps, as was done for the PSC Selective Fishery Model. Robert Kope agreed that it seemed to be an unreasonable constraint.

Jim Scott, however, questioned how the current multiple timestep analysis would be incorporated.  Jim Norris suggested three alternatives. The first is using the PSC Selective Fishery Model approach. Here a ceilinged fishery that spans multiple time steps would use an algorithm that simply subtracts the catch in each time step from the overall ceiling until the ceiling is reached. This method would not be able to exactly duplicate the existing PSC Chinook Model algorithm.

A second alternative is to allow for iterations over all the time steps within a single year. The basic idea would be to do the computations for the whole year, compare the results with some objective function that spanned multiple time steps (e.g., catch allocation objectives), adjust appropriate fishery control variables, repeat calculations for the entire year, etc. until the objective function was met. Jim stated that he and Troy plan on using this second alternative.

A third alternative would be to develop some type of fishery management object that would allow for iterations over selected time steps within a year. The main advantage of this approach is faster overall computation time. Jim stated that this third alternative seemed too complicated, and felt that it should be explored only if the second alternative proved to be unacceptably slow.


4. Future code development (Jim Norris).

Jim stated that the next major code development will be to make the existing fishing mortality algorithms generic for variable timesteps and locations. Once these fishing algorithms are in place, Jim will attempt to duplicate the existing FRAM coho configuration by configuring the maturation rates as Maturation and Migration processes. The next logical step will be to expand the number of areas and time steps and introduce new Migration data estimated from Ken Newman's State Space Model work.


5.  State Space Model update (Ken Newman).

Ken presented an overview of the State Space Model (SSM) and compared it to the code development presented earlier by Jim and Troy. The "working abundance" vector for each cohort in the new code corresponds to the SSM abundance vector times the survival matrix (which includes both fishing and natural mortality). The goal of the SSM is to estimate the parameters for the migration matrices, including the degree of uncertainty, and the parameters for the mortality components.

Ken described the hierarchical nature of the SSM he is now using. At the top level is a set of hyperdistributions, which are probability density functions (PDFs) that generate the initial distribution, movement, and mortality parameters for a given stock in a given year. Namely, to run the SSM for a given stock and year, one randomly draws a set of parameter values, from which one can sequentially generate abundances and catches over time and space using the SSM (with or without natural variation included).

In its present form the hyperdistribution has 3 fixed and 5 free parameters that must be estimated. Ken showed results for the 5 free parameters: initial survival, initial location, US fishing survival, Canadian fishing survival, and movement.

At the August meeting Ken reported that his Empirical Bayes (EB) estimates of initial survival were below the minimum estimated from CWT cohort analysis, and Gary Morishima questioned those results. Ken reported that after the August meeting he found a code error in his program. Once the code was corrected, the results were more consistent with those derived from cohort analysis.

Ken received 3 more years of Humptulips CWT data, so he now has data for 1983-88. He still doesn't have all the effort data, but he's getting closer.

Ken has been working on a new SSM formulation that does not assume normally distributed error terms. The problem with normally distributed errors is that it is possible to get negative catches and abundances. He has been experimenting with an alternative non-normal SSM, where the initial distribution is

(1) multinomial: n ~ M(R*s, P1, P2, ...)

where R*s is the total initial abundance at the start of the fishing season (releases times survival rate), and Pi is the probability of residing in region i.

Then the state equation of abundances evolves according to a lognormal distribution,

(2) lognormal: n(t)|n(t-1) ~ lognormal(ln(M*S*n(t-1)), Sigma)

where M and S are the migration and survival matrices, respectively, n(t) is the abundance vector at time t, and Sigma is the covariance matrix.

Finally the observation process (catch) is linked to the abundance by a Poisson PDF:

(3) Poisson: c(t)|n(t) ~ Poisson(H*n(t))

where c(t) is the catch vector at time t, n(t) is the abundance vector at time t, and H is the harvest matrix.

The main problem in implementing these non-normal error structures is that his program is too inefficient--it requires too much computation time. This remains an unsolved problem for SSMs.


6.  Model comparison update (Jim Norris).

Jim Norris gave an update on further analysis of the PM model. At the June 97 meeting Jim gave a report showing a matrix notation for the PM model, which does not have a migration algorithm per se. Based on this notation, Jim concluded that: "… the PM model makes the tacit assumption that for a given cohort and time step, fish migrate from all donor regions at the same rate."  Rich Comstock noted (at the June meeting) that this assumption may not be correct, because the transition matrix in the PM model is not directly analogous to the migration matrices of the other models. Jim agreed to delete the statement from the report and requested suggestions for a more intuitive description of the PM matrix terms.

Jim described his most recent analysis approach for investigating the question: What is the proper biological interpretation of the Transition Matrix elements for the PM Model? The approach consisted of the following steps:

1. Define a mathematical model of a biological system having 4 regions, 4 fisheries, one stock composed of marked and unmarked components, and Transition Matrices (TMs) such that fish migrate from region 1 to region 4 over time.

2. Define a base period harvesting policy and assume the manager has perfect knowledge of the resulting base period data (i.e., catches and harvest rates).

3. Define new harvesting policies by scaling base period harvest rates (e.g., for the first two fisheries reduce the harvest rate on the unmarked stock component to 10% of the base period rate) and compare "true" results based on the biological model defined in step 1 with predicted results using the PM methods (using correct base period catches and harvest rates).

The results indicated that when there were base period catches in all fisheries and timesteps, the PM model predicted catch for the unmarked stock component was 330 compared to the "true" value of 322. When only half of the fishery/timestep cells had base period catches, the PM model predicted catch was 295 compared to the "true" value of 272.

Jim's conclusion was that the PM model is a biased estimator, and the degree of bias is affected by the distribution of base catches in time and space.

A lengthy discussion followed. Several committee members pointed out that the results should not be too surprising, since all models (including FRAM) are biased to some degree. Jim noted that his frustration in evaluating the PM model performance is that the current documentation does not describe in mathematical terms an underlying model of a biological system. Without an underlying model, it is difficult to determine the biological meaning of intermediate parameters estimated by the PM algorithm (e.g., the adjustment factors used to scale the base period abundances when estimating catches under new harvesting policies).

Jim stated that he plans to add the FRAM model algorithms to the analysis approach to determine relative biases of the PM and FRAM models.


7.  Miscellaneous Items.

7.1. Anchorage Conference.

Jim Anderson reported that the conference held in Anchorage was excellent. Of interest to the NMFS model committee was a talk by Jim Ianelli on risk analysis and conversations with Dave Hankin on relating ocean conditions and survival. Jim also talked with Dave Fournier about his AD Model Builder software--it may have some application for our work. Next year's meeting will be held in November in Anchorage.

7.2. Santa Rosa Workshop.

Robert Kope reported that this workshop was organized by the NMFS Southwest Regional Office to help explain to California fishermen (mostly trollers) and non-technical managers (e.g., PFMC members and staff) how several computer models (e.g., Klamath Ocean Harvest Model, Sacramento Winter Chinook Model, FRAM) are used to evaluate salmon management alternatives. At the end of the meeting Jim Norris gave a presentation on the NMFS model work. The workshop succeeded in raising everyone's level of understanding.

7.3. NerkaSim.

Jim Norris showed some slides on the NerkaSim model developed at UBC. This is a user-friendly model (for the Windows 95 platform) that incorporates individual based models (IBMs) for fish behavior (movement) and bioenergetics (growth and survival) with the NMFS Ocean Surface Currents (OSCURS) model and oceanographic data (sea surface temperature and zooplankton abundance). The model is described in the October issue of Fisheries and can be found on the web at:

http://www.eos.ubc.ca/salmon/nerkasim/

7.4. Development Schedule.

Troy reported that the bulk of the design work is done and we now are mostly coding. Building the GUI will be time consuming. He thinks that sometime in the middle of next year we will have a usable model with a crude GUI.

The meeting adjourned at 3:20 p.m. and the next meeting is scheduled for Tuesday, January 20th, at NMFS Montlake.

18. NerkaSim Model by Ken Newman, 10/20/97

There's an article in the October issue of Fisheries 
discussing NerkaSim, a salmon simulation tool, put together 
by Peter Rand and others.

This may be of interest to the HAM group, not only for its 
frontend (the graphic displays in particular), but also its 
inner workings.  It includes migration (for sockeye currently)
and attempts to incorporate info on the physical environment, e.g.
Sea Surface Temperature, and predators.  For the physical 
environment they have somehow utilized current information
provided by the OSCURS model of Jim Ingraham, which Jim
Anderson has talked about several times.

I have no idea how data-driven, or data-based, the model is.
The strength of the state-space model is the sample-based
parameter estimates, not a lot of best guesses.  I do think
that NerkaSim has many of the "right" components, individual
based migration, survival, ocean environment information,
and predation.  There is no explicit link between harvest and
mortality and is not designed for dealing with various
management actions (but then again the current state-space
model is limited to effort manipulation only).

Ken Newman

17. More "sensible" Empirical Bayes estimates from Ken Newman by Admin, 10/02/97

  At the last meeting Gary Morishima expressed concern over the 
empirical Bayes estimates of initial survival rates (from time
of release to beginning of harvest season) being less than
"crude cohort analysis" estimates, namely total recoveries divided
by release size.

  I thought sampling variation could explain this...but on closer
examination of the programs I found a bug in the evaluation of
the Beta (hyper-)distribution used for this parameter.  I fixed
the bug and now get the following:

Year    Ordinary MLE    Empirical Bayes*    Cohort Estimate
1985    1.63            1.66                1.25
1986    4.14            4.14                3.46
1987    1.46            1.39                1.32

*Based on 1,000 Markov chain Monte Carlo simulations.  The EB estimates 
may change slightly when I do the "full" Monte Carlo simulation of 10,000.

 I expect the MLEs/EB estimates to generally be a bit higher than
cohort estimates because the latter ignores drop-off mortality
and the former "could" allow for it.

 Incidentally the differences between ordinary MLEs and
EB estimates remain relatively large for the other 4 parameters.

 Thanks, Gary!  That question had been nagging me for quite a while!

Ken

16. Update since last meeting from Ken Newman by Admin, 10/02/97

Hi, All

 Just want to stay in touch, since we're at the midpoint between 
 meetings.  Currently working on 2 things-

 1. developing non-normal state-space model

    - currently trying a (multivariate) lognormal distribution
    for the "states" (the actual abundances)  and a (multivariate)
    Poisson dist'n for the "observations" (the expanded CWT recoveries)

    - I just now have a working pgm going, though it's not very efficient
     at all and needs serious tuning...but some _very_ preliminary results 
     suggest that changing the underlying probability distributions
     may indeed have a dramatic effect on the parameter estimates (e.g.,
     initial survival rates, initial spatial location, q-US...).  I hope
     this does _not_ turn out to be the case- I'd like things to be
     somewhat robust to underlying "reasonable" probability distn's. 
     The algorithm for the lognormal-Poisson is much trickier than 
     the old normal-normal case (which uses the Kalman filter), so there's
     a good chance that bugs remain in the pgm.

 2. applying the normal-normal SSM to more data sets and re-computing
    Empirical Bayes estimates of the parameters.

    - Rich and Carrie sent me today what they hope are the "right"
    CWT recovery and fishing effort data sets for 6 recovery years
    (1986-1991).  If the data is correct it should be somewhat mechanical
    to run these sets through the model.

Ken

15. CRiSP Harvest v3.0.6 is now online by Admin, 9/15/97

The latest version of CRiSP Harvest (version 3.0.6) is now available
for downloading at:

/crisp/crisp2pc.html

Most of the many previous bugs have been fixed and the chapters of the
manual (Introduction, Users Guide, Lessons, and Theory) are available
on-line. Be sure to check the Read Me section of Help for last minute
notices. The Lessons Chapter contains step-by-step instructions for
learning about some of the basic model behaviors.

If you encounter any bugs, please report them on our on-line bug report
form at:

/crisp/ch_bug.html

If you have any other questions or comments, please feel free to
contact me directly.

Sincerely,

Jim Norris

14. 22 August 1997 Meeting Minutes by Admin, 9/08/97

TO: NMFS Salmon Model Review Committee and interested parties.

SUBJECT:  Minutes of the August 22, 1997 meeting of the NMFS Salmon Model
review committee

FROM: Jim Norris and Robert Kope


Contents:

1.  Attendance list.
2.  Update on code development (Jim Norris).
3.  Update on Migration Algorithm Report (Jim Norris).
4.  Report on FRAM development for 1998 season (Carrie Cook-Tabor, Jennifer
    Gutmann).
5.  Update on State Space Model development, parameter estimates, and
    utilization (Ken Newman).


1.  Attendance:

Robert Kope (NMFS)
Jim Scott (NMFS)
Ken Newman (UI)
Jim Norris (UW)
Dell Simmons (USFWS)
Jennifer Gutmann (NWIFC)
Din Chen (CDFO)
Cara Campbell (NMFS)
Jim Anderson (UW)
Christine Muongchanh (UW)
Carrie Cook-Tabor (USFWS)
Norma Jean Sands (ADFG)
Gary Morishima (QINC) 
David Caccia (UW)

Peter Lawson and Tom Wainwright joined the meeting via speaker phone from
Newport, OR.


2.  Update on code development (Jim Norris):

Jim Norris began by describing the fundamental parameter specification
dilemma: How to maintain complete parameter specification flexibility
without restricting the process equations? For example, most existing
models assume the natural mortality rate is age specific, but is constant
for all years and stocks. In this case, the parameter is specified by a
single age index--NatMortRt(age). It is conceivable that future models also
may want the natural mortality rate to vary by year, time step, region, and
cohort. To get this type of flexibility would require a five dimensional
array -- NatMortRt(age,year,step,region,cohort). It is probably impossible to
have arrays this big, so some sort of variable array specifications will be
required.

Jim gave a summary of the potential specification levels for different
processes:

Non-salmon processes (e.g., physical environment and non-salmon biological
environments) may have parameters that vary by:

  -- year
  -- time step
  -- region

Salmon processes (e.g., natural mortality, growth, spawning, migration) may
have parameters that vary by:

  -- year
  -- time step
  -- region
  -- cohort

Fishing processes (e.g., legal and incidental fishing mortalities) may have
parameters that vary by:

  -- year
  -- time step
  -- region
  -- cohort
  -- fishery

To fully specify all parameters with maximum flexibility using a
traditional multidimensional array approach would require a huge number of
parameters. Jim distributed a summary of a Generic Parameter System that
Troy developed that allows complete flexibility in specifying parameters (a
copy of the description is available at
/harvest/gen_param.html). The goal of the
system is to provide sparse arrays of arbitrary dimension and size at run
time, without requiring changes in the underlying source code.

The primary component of the system is a GenericParamArray object that
holds pointers to either another GenericParamArray or a GenericParamFinal
value. This provides for the arbitrary number of dimensions for each
parameter.

One requirement of this system is to have some dimensions specified at the
model level:

  -- Number of geographic areas
  -- Number of species
  -- Number of stocks for each species
  -- Number of age classes for each species

The following dimensions can be specified at the year level:

  -- Number of time steps for each year
  -- Number of cohorts for each stock for each year
  -- Number of fisheries for each year

Jim Scott questioned why the number of age classes was under the model
level, and not the stock level. Jim explained it was necessary to specify
the number of age classes at the highest possible level to make the
GenericParamArrays work properly. Stocks that typically had fewer age
classes (e.g., 4 instead of 6) would simply have zero abundance in the
higher age classes. For example, the migration matrices for those stocks
would migrate all the age 4 fish into the spawning grounds.

Gary noted that the number of years was missing from the model level. Jim
agreed.

Carrie pointed out that fixing the number of geographical areas at the
model level could cause a problem because the Proportional Migration (PM)
model sometimes has different areas in different years and time steps. She
noted that the PM model uses CWT recoveries from statistical areas to
determine the "donor" areas, and these can change from time step to time
step. Jim N suggested that this may not be a problem if the areas are
specified at the highest possible resolution in the model, and then
defining migration matrices in terms of groups of areas. Jim N said he
would discuss this issue with Troy.

Gary noted that the generic system works by setting up pointers to
different parameters. He also asked about the data types and was concerned
that small typos could mess up the configuration. He stressed the need to
make sure that pointers point to the right location. Jim Norris will pass
this concern on to Troy.

Robert commented that this configuration will allow us to set up separate
data types. The configuration file will be set up without having to access
the source code.

Jim Norris added that the Generic Parameter System was not implemented yet
and might not be the final model.

Jim Scott encouraged the group to try this configuration to see how it works.
There was general concern about the added access time the generic system
might require. Jim said he and Troy had discussed this and had concluded it
would not be significantly slower. The group recommended that the system be
tested against a full array system. Jim agreed to do the test. 


3.  Update on migration algorithm report (Jim Norris):

Jim Norris stated that many people had trouble with the migration matrix
notation in the State Space Model (SSM). Currently, the rows index the "to"
regions and the columns index the "from" regions. Thus, each element of the
matrix, say a(i,j), represents the fraction moving from region j to region
i. Most people would like each element to represent the fraction moving
from region i to region j. Jim asked if anyone would object to changing the
definition of the matrix. This would result in the SSM formula using the
transpose of the matrix. No strong objections were voiced.

Jim asked if anyone had come up with a better intuitive definition for the
migration matrix elements when applied to the Proportional Migration model.
No one had given this any thought.

Jim would like to add the "Box Car" model to the report. Robert thought it
was a good idea and agreed to give Jim some references on it.


4.  Report on FRAM development for 1998 season (Carrie Cook-Tabor, Jennifer
Gutmann).

Jennifer briefly reported on the July 15 meeting on FRAM development. Jim
Packer is doing most of the coding to allow the FRAM model to evaluate
selective fisheries. This modification is required by the letter of
agreement between the Tribes and Washington Department of Fish and Wildlife
regarding mass marking. Minutes of the July 15 meeting were distributed
(these are available at /harvest/fram.html).

Basically, the FRAM model is being modified to double the number of stocks,
thus allowing each stock to have marked and unmarked components.

Gary asked how the new FRAM model would be evaluated and tested. He
suggested using the same datasets used by the PSC Selective Fishery Model a
few years ago to see how different the results are from the two models. It
was left uncertain who, when, or how the new FRAM model would be tested. 


5. Update on State Space Model development, parameter estimates, and
utilization (Ken Newman).

Ken began by outlining general project objectives:

-- Protection/Adequate Escapements
-- Set Catches
-- Season "stability"

Management actions to meet these objectives include time/area closures,
quotas, ceilings, selective fisheries, etc.

Tools for evaluating the actions must have the following characteristics:

-- Reflective of actions
-- Structurally meaningful (causal, biological)
-- Must reflect uncertainties

Ken showed some examples of how the State Space Model (SSM) can be used. He
showed results of Monte Carlo simulations (box plots of total catches from
100 games). The sources of variation in these simulations were annual and
stock-to-stock variation in SSM parameters (i.e., initial survival, initial
distribution, catchability coefficients, migration parameters) and the
deviation between what managers say will be done and what actually occurs.

There are 5 specific parameter estimates from the current version of the SSM:

1. Initial survival rate
2. Initial location (location parameter of a beta distribution; the shape
   parameter is fixed at 2.0)
3. US Fishing Effectiveness
4. Canadian Fishing Effectiveness
5. Movement parameter (location parameter of a beta distribution; the shape
   parameter is fixed at 3.0)

The natural mortality parameter is set to zero during the time steps of the
fishing period.

The latest results show a more consistent pattern for the fishing
effectiveness estimates (i.e., the US and Canadian values are in the same
relative position from year to year). The previous results were inconsistent.

Ken has found that the estimated migration rates range from 0 to 86 km/day,
but are generally within observed values.

Ken described his latest work using both an inner model and an outer, or
hypermodel. There are 5 parameters in the inner or SSM. These are generated
from a multivariate hyperdistribution,  a so-called "outer" model. This is
currently formulated as simply the joint distribution of 5 independent
random variables.  Four of the distributions are gamma(alpha[i], beta[i])
i=1,..,4  and one is beta(alpha,beta). The parameters of this
hyperdistribution are called hyperparameters, of which there are 10.  Five
of the hyperparameters are fixed and 5 are estimated from historical data.

Ken noted that his biggest headache now is getting data. Specifically, for
a given year and a given stock he needs two tables (matrices) of data. The
first has areas for rows, statistical weeks for columns, and all CWT
recoveries (by gear type if possible) for each element. The second matrix
has the same row and column indices, but has some consistent measure of
effort for each element.

Ken asked for comments on whether his model system has anything missing
(leakage).

For future work, Ken suggested the following:

-- Use non-normal, non-linear SSM (e.g., log-normal, Poisson)
-- More complex spatial framework (e.g., ocean and Puget Sound migration path)
-- More complex fisheries interaction (e.g., competing gear types and
   different temporal scales)
-- Partitioning sources of variation (i.e., isolate and quantify each of
   the sources of variation to determine the more serious or influential
   ones; data collection methods might then be changed).


The meeting was adjourned at 12:30 pm. Next meeting was scheduled for
Tuesday, October 28 at NMFS Montlake Lab.

13. Minutes of the June 19, 1997 meeting of the NMFS Salmon Model review Committee by Admin, 8/19/97

DATE: August 4, 1997

MEMORANDUM FOR: NMFS Salmon Model Review Committee and interested parties

SUBJECT: Minutes of the June 19, 1997 meeting of the NMFS Salmon Model
review Committee

FROM: Jim Norris and Robert Kope


Contents:

1.  Attendance List.
2.  Update on the Proportional Migration Model (Rich Comstock).
3.  Literature Review Update (Dave Caccia).
4.  Review of Migration Algorithms (Jim Norris)
5.  Report on Alaska Transboundary Salmon Model (Norma Jean Sands).
6.  FRAM Status Report (Jim Scott).
7.  Code Design Report (Troy Frever).
8.  State Space Model Report (Ken Newman).
9.  General Migration Modeling Report (Robert Kope).
10.  Model Comparison Report (Jim Scott).


1. Attendance.

Robert Kope (NMFS)
Tom Wainwright (NMFS)
Steve Lindley (NMFS)
Peter Dygert (NMFS)
Jim Scott (NMFS)
Jennifer Gutmann (NWIFC)
Carrie Cook-Tabor (USFWS)
Rich Comstock (USFWS)
Marianne M. McClure (CRITFC)
Steve Caromile (WDFW)
Norma Jean Sands (ADFG)
Ken Newman (UI)
Jim Norris (UW)
David Caccia (UW)
Christine Muongchanh (UW)
Jim Anderson (UW)
Troy Frever (UW)


2. Update on the Proportional Migration Model (Rich Comstock).

Rich Comstock gave an update on data sets and a web interface for the
Proportional Migration (PM) model. In simplest terms, the PM model
evaluates the effects of a proposed management regime by asking the
question: What would have happened in year 19?? if the relative stock
abundances were different, some selective fisheries were implemented, and
the effort levels in some fisheries were changed? For a single run, the
model reports new exploitation rates by fishery for each stock component
(marked, unmarked, wild, etc) relative to the "base" year. If data for more
than one "base" year are available, evaluations can be conducted relative
to each base year.

Rich distributed handouts showing plots of predicted exploitation rates vs
fishery for three components of the Quinault stock based on six base years
(1986-1991). For each fishery, the plots showed the mean exploitation rate
with "error bars." The "error bars" were really the range of exploitation
rates predicted by the PM model when all six base years were used. Thus,
the "error bars" show the range of variability, but do not represent
confidence limits in the traditional statistical sense.
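
The distinction can be sketched with invented numbers: for one fishery, the bar is just the min-to-max range of the exploitation rates predicted from the six base years, not a confidence interval.

```python
from statistics import mean

# Invented exploitation rates for one fishery, one per base year (1986-1991).
rates = [0.12, 0.15, 0.09, 0.18, 0.14, 0.11]

# The "error bar" is the observed range across base years, with the mean
# plotted as the central point; no sampling distribution is involved.
low, mid, high = min(rates), mean(rates), max(rates)
assert low <= mid <= high
```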

Rich pointed out that effort data is still missing for some fisheries and
years, and this causes problems with the model. Robert Kope commented that
effort units are not consistent across fisheries, and suggested that it
would be helpful to clearly define the effort units. Ken Newman asked how
efforts are used in the model. Rich explained that they are used to scale
the harvest rates in the catch equations. The model assumes that
catchability is constant.
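
As a sketch of the kind of catch equation being described (the PM model's exact functional form is not given in these minutes), a Baranov-style equation with a constant catchability coefficient q scales the harvest rate by effort:

```python
import math

# Hypothetical illustration: harvest rate = 1 - exp(-q * effort),
# with catchability q held constant as the minutes describe.
def catch(abundance, effort, q):
    return abundance * (1.0 - math.exp(-q * effort))

# Doubling effort raises the harvest rate, but less than proportionally.
c1 = catch(10000, 100, q=0.005)   # harvest rate about 39%
c2 = catch(10000, 200, q=0.005)   # harvest rate about 63%
assert c2 < 2 * c1
```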

Rich pointed out that individual stock migration in the PM model is
simulated by mapping which fishing regions (i.e., fisheries) at time t
contribute fish to each fishery at time t + 1. The time steps are one
month. The mapping is based on PSC Statistical Areas from the CWT database.
 The goal is one map for all years. There was some discussion about the
possibility of creating a web tool that illustrated the mapping so users
could see which regions were contributing fish to which fisheries at each
time step. Robert Kope mentioned that NMFS has hired Karen Campbell who
specialized in GIS and that NMFS might be able to contribute to creating
such a tool.

Jim Norris asked Rich how they are handling situations where the effort
data and recovery data do not have a one-to-one correspondence across
areas. For example, the CWT and effort data often look like this:

Area    CWT     Effort
4b       50        100
5        20        100
6        10        100
4b & 5   20

In this example there is effort data for each region, but some tag
recoveries are lumped over two areas. The question is how to allocate the
CWT recoveries that are lumped into more than one area. Rich replied that
someone else who provides them the data (perhaps Jim Packer at WDFW) makes
these decisions. [Note: I'm still not clear on this. Rich/Carrie: can you
add text here to clarify this.]
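
One plausible allocation rule, offered purely for illustration since the minutes leave the actual rule unresolved, is to split lumped recoveries in proportion to the area-specific recoveries:

```python
# Hypothetical sketch only: the minutes note that the data provider makes
# this allocation, and the actual rule used is not known here.
def allocate_lumped(area_cwt, lumped):
    """area_cwt: {area: recoveries}; lumped: {(area, ...): recoveries}."""
    result = dict(area_cwt)
    for areas, count in lumped.items():
        total = sum(area_cwt[a] for a in areas)
        for a in areas:
            # Split the lumped count in proportion to each area's own recoveries.
            result[a] += count * area_cwt[a] / total
    return result

# The example table from the minutes: 20 recoveries lumped over areas 4b and 5.
cwt = {"4b": 50, "5": 20, "6": 10}
out = allocate_lumped(cwt, {("4b", "5"): 20})
assert abs(sum(out.values()) - 100) < 1e-9   # all 100 recoveries accounted for
```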

Overall, Rich Comstock asked others for input and offered to demonstrate
the model at the end of the meeting.

Jim Anderson suggested that they can run the model on the UW web site.


3. Literature Review Update (Dave Caccia).

David Caccia distributed an outline of his literature review of ocean
migration of salmonids. He noted that "mark recapture data" is very
expensive data to get. He is grouping references by life history
characteristics, and has found that some stocks don't migrate very far. He
has some doubt about the accuracy of speed, timing & direction estimates
for smolts and juveniles. He has found that in general migration rates are
not reported very precisely.

David mentioned a reference suggesting that each year chinook migrate back
to near the mouth of the river of origin, with some members moving
up-stream to spawn while others move back out to offshore waters. The group
requested this reference. [The reference is: Brannon, E. and A. Setter.
1989. Marine distribution of a hatchery fall chinook salmon population. Ed.
E. Branon and B. Johnson, Proceedings of the Salmonid Migration And
Distribution Symposium, June 23-25, 1987. School of Fisheries, Univ. of
Washington, pp 63-69. It is an analysis of UW hatchery stock.]

Robert Kope mentioned that Doppler sonar techniques are currently being used
in fresh water, and may have some application in marine waters.

Jim Anderson suggested that we should give some more thought about types of
information that will be useful in the models and get information out to
the community.

Robert Kope added it will be useful to have a bibliography of the literature.


4. Review of Migration Algorithms (Jim Norris)

Jim Norris distributed a draft report "Review of Salmon Management Model
Migration Algorithms." A copy of the report is available on the web site at:

/harvest/norris/mig.pdf

The report describes migration algorithms for five models (State Space
Model, PSC Chinook Model, PSC Selective Fishery Model, Proportional
Migration Model, and FRAM) in a common matrix notation.

A key finding in the report was a matrix notation for the PM model, which
does not have a migration algorithm per se. Based on this notation, the
report concludes that: "... the PM model makes the tacit assumption that for
a given cohort and time step, fish migrate from all donor regions at the
same rate." Jim suggested that we should do some testing to see if there
are violations of this assumption.

Rich Comstock noted that the assumption may not be correct, because the
transition matrix in the PM model is not directly analogous to the
migration matrices of the other models. Jim agreed to delete the statement
from the report and requested suggestions for an intuitive description of
the PM matrix terms.


5. Report on Alaska Transboundary Salmon Model (Norma Jean Sands).

Norma Jean Sands briefly described a fisheries bioeconomic model she has
been developing to examine allocation issues in Pacific salmon between the
US and Canada.  The model currently incorporates four stocks (sockeye and
pink salmon from each country) and twelve fisheries from the northern
boundary area between Alaska and British Columbia.  The migration of stocks
is through the waters of the other country first and then the country where
spawned, and is through outer fisheries, then inner fisheries, and finally
terminal fisheries.  Terminal fisheries do not intercept fish that spawn in
the other country.  The migration parameters (percentage of each stock that
migrates through the waters of each fishery) were determined by an iterative
process to best give the average catches, interceptions, escapements, and
harvest rates observed in the early 1990s.   This gives a linear flow of
fish through the system, but not all fish within a stock follow the same
path and stocks are not equally vulnerable to the various fisheries.

The model objective is to test various allocation schemes and compare how
they do on stock productivity and fisheries economics.  The model solves for
a sustainable stable solution and does not incorporate annual variability in
stock production or prices.  The three allocation schemes tested and their
modeled consequences were:

1. Balancing interceptions resulted in fisheries only being conducted in
terminal fisheries (i.e., interceptions = 0) and in stock production being
underutilized by the fisheries.

2. Maximizing sustainable yield of all stocks simultaneously resulted in
high stock production and economic gain, although the share of the economic
gains between the two countries was not proportional to stock production by
each country's streams.

3. Maintaining a fixed harvest rate by each fishery that approximates the
harvest rates experienced in the early 1990s resulted in relatively high
stock productivity and fisheries benefits for both countries.

Jim Scott asked how the observed harvest rates were estimated.  Norma
replied that they were based on run reconstruction of the stocks prepared by
the Northern Boundary Technical Committee.


6. FRAM Status Report (Jim Scott).

One of the provisions of the agreement between the Tribes and WDFW
regarding selective fisheries was to modify FRAM so it can analyze
selective fisheries. Jim Scott reported that Jim Packer at WDFW was taking
the lead on this issue. His approach is to double the number of stocks to
allow for marked and unmarked components. The exploitation rates will be
scaled differently on marked and unmarked fish in selective fisheries.
There will be a meeting at WDFW on July 15 to discuss FRAM modifications in
more detail.

Jim also mentioned that NMFS and the PFMC are sponsoring a salmon model
workshop in Santa Rosa, CA on October 8-9.


7. Code Design Report (Troy Frever).

Troy briefly described some relevant code elements of the adult upstream
migration model he is working on with Rich Zabel. The river is divided into
"reaches" separated by "locations." The model currency is a cohort of fish.
Each cohort has a matrix of abundances by time step (rows are locations;
columns are time steps). Each element of the matrix is the number of fish
exiting that location at that time step. Thus, the sum of the columns is
not 1.

The computation engine loops over time steps, locations, then cohorts. Some
of the code elements that may be used in the new model are:

-- chronographer;
-- geographer;
-- cohort manager.


8. State Space Model Report (Ken Newman).

Ken distributed a report summarizing his updates to the State Space Model.
A copy of the report is available at:

/harvest/newman/statespace.pdf

He is using the same Humptulips data as in the past, but has changed from a
Gamma Distribution to a Beta Distribution for modeling migration. For each
year and stock, the model estimates six parameters: 2 for the Beta
Distribution describing initial abundance; 2 for the catch equations (one
each for Canada and US fisheries); and 2 for the Beta Distribution
describing movement.

The latest results showed quite a bit of difference in the two fishing
mortality parameters. In 1985 the Canadian parameter was much larger than
the US parameter, but in 1986 the US parameter was larger. It was hoped
that these parameters would show more consistency because they are similar
to catchability coefficients.


9. General Migration Modeling Report (Robert Kope).

Robert noted that most of the models, except the PM model, are single pool
models that ignore spatial effects. He also noted that the PSC Selective
Fishery Model is the same model that has been used to model tuna migration.

Robert introduced the "Box Car" model used to model Fraser River sockeye.
This model assumes a constant migration rate with no dispersion or
diffusion as the migration proceeds. Thus, the migration matrix for this
type of stock has a series of 1s on the sub-diagonal, and 0s elsewhere. At
each time step, all the fish in one area move into the next region. When
fishing occurs in a region, a large gap in the distribution occurs that
persists throughout the rest of the migration.
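
The shift structure described above can be sketched as follows (4 areas and invented abundances, using the "from column j to row i" matrix convention):

```python
# Box Car migration matrix: 1s on the sub-diagonal shift the entire
# contents of each area into the next area at every time step.
def boxcar_matrix(n_areas):
    m = [[0.0] * n_areas for _ in range(n_areas)]
    for i in range(1, n_areas):
        m[i][i - 1] = 1.0   # everything in area i-1 moves to area i
    return m

def step(m, abundance):
    return [sum(m[i][j] * abundance[j] for j in range(len(abundance)))
            for i in range(len(m))]

n = [100.0, 80.0, 60.0, 0.0]
n = step(boxcar_matrix(4), n)   # all fish advance one area
assert n == [0.0, 100.0, 80.0, 60.0]

# Fishing in one area leaves a gap that persists as the run advances.
n[1] -= 50.0                    # harvest 50 fish in area 1
n = step(boxcar_matrix(4), n)
assert n == [0.0, 0.0, 50.0, 80.0]
```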

Ken Newman questioned how the Fraser River managers estimated the constant
migration rate. No specific answer, other than it comes from the technical
management group every year. [Robert: Is this right? Maybe you can fill in
some details here.]


10. Model Comparison Report (Jim Scott).

A work group met on April 22 in Olympia to discuss methods for comparing
alternative migration algorithms. Jim Scott handed out a summary of this
meeting. The general approach is to create a synthetic Monte Carlo dataset
with minimal complexity to facilitate evaluation of alternative algorithms.
The dataset will be designed with the intent of developing an understanding
of the effects of the following factors on model performance:

-- varying levels of interannual variability in the migration of a stock;
-- variability between stocks in the underlying migration patterns;
-- variability in the temporal pattern and magnitude of fisheries;
-- varying levels of uncertainty in the estimated number of CWT recoveries;
-- level of geographic and temporal resolution.

Measures of model performance will include the mean absolute percent error
(MAPE) and mean percent error (MPE) for catch, escapement, and exploitation
by stock. It is anticipated that the relative
performance of alternative algorithms may depend upon the factors
identified above.

Jim noted that he needs the State Space Model algorithm in order to do this
work. Ken agreed to provide it.

Jim Norris suggested using an individual based model (IBM) to generate the
synthetic data set. There was some discussion about the problem of which
algorithm to use to generate the Monte Carlo synthetic data. Peter noted
that if we use Ken's State Space Model (SSM) in the IBM to generate
synthetic data, the SSM will perform the best in the tests. There was no
clear resolution of this problem.

Jim Anderson commented that a model with both physical and biological
elements would be a good model.


The meeting adjourned at 4:00 p.m.  Next meeting is scheduled for August 22
at NMFS Montlake lab, at 9:00 a.m.

12. Progress on running SSM for mgmt planning by Ken Newman, 7/25/97

HAM Group

 I've completed one phase of addressing the problem that I
referred to last week as "How to run the SSM for pre-season 
mgmt and in-season adjustments".  Below is a brief description
of the approach and some questions.  If anyone is interested 
in seeing histograms of catch output from stochastic simulation 
runs- the kind of output I'm thinking managers would then examine
to choose between management regimes, let me know.

Ken

---------------------------------------------------------------
I. The overall structure is the basic  INPUT -> MODEL  -> OUTPUT,

 INPUT: 
 R      = release number, number entering ocean at Age 2
 Effort = an 11 area by 16 week matrix of US and Canadian 
          commercial troll effort 
 6 "Free" parameters = 
    initial survival rate from release time to beginning of harvest
    alpha parameter of the Beta distribution for initial location
    US catchability coefficient
    Canadian catchability coefficient
    alpha parameter of the Beta distribution for movement
    beta parameter of the Beta distribution for movement
 2 "Fixed" parameters = 
    beta parameter of the Beta distribution for initial location 
    natural mortality parameter

  MODEL:
   "Simply" the state-space model run forward in time:
       n[0] is vector of initial abundance
       n[1] is the vector after time 1 mortality and movement
       c[1] is the vector of time 1 catch from n[1] vector
       n[2] is the vector after time 2 mortality and movement
        ....
       n[16] is the final period abundance prior to harvest
       c[16] is the final period harvest (mostly escapement to
            natal area)

   OUTPUT:
     A vector of initial abundance
     An 11 by 16 matrix of abundance for time periods 1 to 16
     An 11 by 16 matrix of catch for time periods 1 to 16
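
A minimal deterministic sketch of this forward run, with made-up numbers and heavy simplifications (3 areas and 4 time steps instead of 11 and 16, a fixed migration matrix in place of the Beta-based movement model, and a simple exponential survival/harvest structure):

```python
import math

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def run_forward(n0, migration, effort, q, nat_mort, nsteps):
    """Run the state vector forward: movement, natural mortality, then catch."""
    abundance, catch = [], []
    n = n0
    for t in range(nsteps):
        n = matvec(migration, n)                     # movement
        n = [x * math.exp(-nat_mort) for x in n]     # natural mortality
        c = [x * (1 - math.exp(-q * e)) for x, e in zip(n, effort[t])]
        n = [x - ci for x, ci in zip(n, c)]          # remove catch
        abundance.append(n)
        catch.append(c)
    return abundance, catch

# Invented inputs: fish drift toward the terminal area (row 2), and the
# final time step is a terminal-area "fishery" (mostly escapement).
mig = [[0.8, 0.0, 0.0],
       [0.2, 0.9, 0.0],
       [0.0, 0.1, 1.0]]
eff = [[100, 100, 0]] * 3 + [[0, 0, 200]]
ab, ct = run_forward([1000.0, 0.0, 0.0], mig, eff, q=0.003,
                     nat_mort=0.01, nsteps=4)
```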

---------------------------------------------------------------
II. Simulations can be done in deterministic or stochastic mode.

If done deterministically, R, Effort, the 6 free parameters, 
the 2 fixed parameters all remain constant, and the expected
abundance and catch matrices result.  Only one run needs to be 
done.

If done stochastically, the user can specify on a per component
basis  whether or not R, Effort, the 6 free parameters, or the 2
fixed parameters can vary at random.  Even if R, Effort, and
the parameters remain constant, the abundance and catch vectors
will be stochastic, generated as Multivariate Normal random
variables.   Several runs should be done and the user ends up
with a 3 dimensional array, a collection of abundance and 
catch matrices indexed by simulation order.

 I've been producing histograms of the resulting total catch by
area...this includes pictures of the uncertainty in terminal area
escapement, as well.

---------------------------------------------------------------
III.  Here are my questions (and more details)

1. Any ideas on the stochastic structure of R?  How to quantify
the uncertainty in actual release number?  

As written now, the user inputs a percent relative error for R,
like 10%, which is converted to standard deviation, s= X% * R
(e.g., if R = 50,000 and 10% is input, s = 0.1 * 50,000 = 5,000),
and R is simulated as Normal rv with mean R and this std dev s.

We will eventually want to distinguish between hatchery and wild 
stocks in particular, with hatchery release numbers having far 
less uncertainty(?).  
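
The scheme just described can be sketched as follows (illustrative only):

```python
import random

# The user-supplied percent relative error sets the standard deviation
# of a Normal draw around the nominal release number R.
def simulate_release(r, pct_error, rng):
    s = pct_error * r              # e.g., 0.10 * 50,000 = 5,000
    return rng.normalvariate(r, s)

# Note: unlike the Poisson used for effort below, a Normal draw can in
# principle go negative when the percent error is large.
rng = random.Random(0)
draws = [simulate_release(50_000, 0.10, rng) for _ in range(1000)]
assert 45_000 < sum(draws) / len(draws) < 55_000
```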

2. Same for Effort matrix?

As written now, the inputted effort matrix is used to provide
the mean for Poisson random variables.  E.g., if the effort matrix
says 1000 units of effort in Brookings in the 5th week, in stochastic
mode a Poisson rv with mean 1000 is generated.  The Poisson has
the nice feature that the resulting value will remain non-negative.
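
A sketch of this Poisson scheme follows. The Python standard library has no Poisson sampler, so this builds a Poisson(m) draw as a sum of unit-mean draws (the sum of independent Poissons is Poisson), where Knuth's method does not underflow even though the effort means are large:

```python
import math
import random

def poisson1(rng, mean=1.0):
    """Knuth's Poisson sampler; safe only for small means."""
    l = math.exp(-mean)
    k, p = 0, 1.0
    while p > l:
        k += 1
        p *= rng.random()
    return k - 1

def poisson(rng, mean):
    """Poisson(mean) as a sum of unit-mean draws plus a fractional remainder."""
    whole, frac = int(mean), mean - int(mean)
    total = sum(poisson1(rng) for _ in range(whole))
    if frac > 0:
        total += poisson1(rng, frac)
    return total

def simulate_effort(effort_matrix, seed=0):
    """Each cell of the input effort matrix is the mean of its own draw."""
    rng = random.Random(seed)
    return [[poisson(rng, cell) for cell in row] for row in effort_matrix]

# First cells of the example effort matrix from the minutes.
effort = [[300, 739, 624, 496], [1305, 739, 624, 496]]
sim = simulate_effort(effort)
assert all(cell >= 0 for row in sim for cell in row)   # stays non-negative
```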

E.g., here's the first 2 rows of an inputted effort matrix (area by time)
along with 3 simulated sets:

Input:
 300  739  624  496  934  900  758  542  384  316  317  206  153  200   42   44
1305  739  624  496  934  900  758  542  384  316  317  206  153  200   42   44

Sim #1:
 309  694  616  530  862  934  723  503  399  321  309  188  160  199   34   43 
1234  762  606  436  893  908  791  555  376  293  322  206  172  189   47   33 

Sim #2:
 293  749  609  471  946  896  765  526  368  332  338  231  152  193   35   52 
1291  718  650  501  846  868  766  557  401  322  325  187  158  209   37   52 

Sim #3:
 355  732  603  507  977  910  754  576  369  313  328  203  143  210   43   45 
1338  709  599  463  944  926  765  555  390  291  317  207  158  230   40   48 

3. Regarding free parameters- I'm currently just sampling from the 3 sets
of parameter estimates for the 1985, 1986, and 1987 data.  Namely, randomly
draw from one of these 3 rows:

Alpha-  Init    US    Canad  Alpha-  Beta-
Loc     Surv   catch. catch. move    move
7.975   1.474 13.940  2.816  8.469   1.623 
3.639   4.152  6.212  4.303 10.292   3.064 
4.017   1.372  2.755  2.257 66.840  24.150 

 This is a very crude sampling of the parameter space and points out the
need to at least get more years of CWT and effort data together.

4. Regarding fixed parameters- currently taking a uniform random sample
 from 2 to 4 for Beta-Location parameter
 from 0 to 10 for Natural-Mortality parameter
(when Natural mortality parameter is 10, the survival rate in the absence of
 fishing during the 16 week period is about 85% =  exp(-10/1000*16).)

 Any other ideas, especially on natural mortality during this period?

---------------------------------------------------------------
IV. Remarks/questions about the next phase-
 1. Do you know of any other areas of uncertainty that should be considered?

 2. Uncertainty in the free parameters is probably best dealt with by
   the Empirical Bayes (or Bayes Empirical Bayes) approach.  I.e.,
    a. specify a probability distribution for the parameters themselves
    b. use the historical data (several years of data) to estimate the 
       hyperparameters of this probability distribution
    c.  when running the model in simulation mode, a random draw is
      then made from the probability distribution using the estimated
      hyperparameters.
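
Steps (a)-(c) can be sketched under a simplifying assumption not stated above: an independent Gamma distribution per free parameter, fit by the method of moments to the three historical estimates from section III.3. A real EB (or BEB) fit would be likelihood-based and account for within-year estimation error:

```python
import random
from statistics import mean, variance

def fit_gamma(estimates):
    """Method-of-moments Gamma fit: returns (shape, scale) hyperparameters."""
    m, v = mean(estimates), variance(estimates)
    return m * m / v, v / m

# US catchability estimates for 1985, 1986, 1987 from the table above.
us_catch = [13.940, 6.212, 2.755]
shape, scale = fit_gamma(us_catch)

# Step (c): simulation runs draw a smoothed parameter value from the
# fitted hyperdistribution instead of resampling the three raw estimates.
rng = random.Random(1)
draws = [rng.gammavariate(shape, scale) for _ in range(1000)]
assert min(draws) > 0
```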

   An alternative is to just have a large set of historical parameter estimates
   to sample from (rather than just 3)...but I think this "non-parametric"
   approach will yield quite irregular histograms of outputs.  Just sampling
   from the above 3 vectors yields some very bumpy pictures.  The EB (or BEB)
   approach will smooth these histograms out.

   The EB (or BEB) steps are not trivial, however.  Specifying a joint 
   probability for the probabilities is not automatic.  And even given such
   a distribution, estimating the hyperparameters looks to be computer 
   intensive.  Once this is "done", however, and we have a distribution,
   sampling from the distribution for simulations should be relatively
   simple and fast.  This is the problem I'm focusing on now.

   In any case- whether go with EB or sample from "unsmoothed" historical
   parameter estimates, we really need to get those CWT and effort matrices
   put together.  Just adding several more years of the Grays Harbor data
   would be a great start.

10. Newman's suggestions for next step in SSM work by Ken Newman, 7/16/97


  I've sent some of you an updated manuscript of the SSM for coho
which includes new work since the June 19 mtg- primarily a new and
improved migration component thanks to Jim Norris' suggestions.
Anyone who hasn't got a copy and would like one- please let me know.



  Below I've listed some ideas on where to go next, have highlighted
some areas where help is needed (with *), and would appreciate any feedback
you have on any of the points, and other ideas about what's important to
address.  They're listed in descending order of (perceived) importance;
 #1 may be far and away the most important. 


Ken


-------------------------------------------------------------------



 1. How to run the SSM for pre-season mgmt and in-season adjustments
   *a. what are the outputs of interest?  It seems like they
  would at least be catch and escapement for a stock, total catch
  for a fishery/user group



    b. inputs?  as currently designed, there are at least 3 types
  of inputs-
     (i)   abundance estimates at beginning of period (or for coho,
           at time of release)
     (ii)  effort levels and/or quotas by time and area
     (iii) parameter estimates for the particular year (like initial
           distribution parameter)



   *any ideas on eventual number of stocks in the model(s)? for
   coho and for chinook?
   *any success relating mgmt seasons/schedules to observed effort
   levels?



    c. uncertainty of output- related to some of the above inputs,
    there are 5 sources of uncertainty in the output, e.g., estimates 
    of catch for a stock
     (i)   uncertainty in abundance estimates at t=0
     (ii)  uncertainty in actual effort levels
     (iii) uncertainty in the values of the parameters for
           that year 
     (iv)  natural variability- conditional on abundance, effort,
           and parameter values there will still be "error"
     (v)   uncertainty in effect of "fixed" parameters on results
           (sensitivity to assumptions)



    d. dealing with the uncertainty- I am keen to try an Empirical
     Bayes approach to accounting for these different sources of
     uncertainty.  Some of you might be familiar with Adrian Raftery's
     (http://www.stat.washington.edu/raftery/Research/Whales/whales.html)
     (and others) work with the Int'l Whaling Commission for modeling
     Bowhead whale pop dynamics.  He uses a Bayes Empirical Bayes
     approach that the Int'l Whaling Commission now accepts as the
     "std" methodology.  There's some similarity with salmon
     harvest mgmt problems and what I'm thinking about trying.



     The Empirical Bayes approach is not really Bayesian per se,
     it just recognizes that the parameters are random, like the
     natural mortality rates in the ocean will vary from year to
     year.  We would formulate probability distributions for the
     parameters- the probability distributions would have 
     "hyperparameters".  E.g., the distribution of initial survival
     rates is Beta with parameters alpha and beta. Then use the 
     historical data, e.g., 10 years of Grays Harbor coho, to estimate
     the hyperparameters.  Next to make a forecast for the coming
     year could sample from the probability dist'ns for the parameters
     and run the model.  Could repeatedly do this to get a probability
     distribution for the outputs- draw histograms of predicted escapement
     for a stock, etc.



     * what's the prognosis on getting historical coho data for effort 
       and CWT recoveries into rectangular matrices?



 2.  Non-normal state-space models.     
      - the normality assumption can lead to some unreasonable predicted
      abundances and catches, negative ones, when abundance and effort
      are quite low
      - it might be better to use something like a Poisson dist'n for
       the catches, (like Ray Hilborn did in a 1990 CJFAS article on tuna 
       migration) and maybe abundances
      - this will require different techniques for estimating the historical
       parameters (Monte Carlo methods), but the simulation for pre-season
       planning might be simple 



 3.  Integrating multiple types of fisheries.
      - to deal with overlapping fisheries, say sport and troll, the catch
       equations can be modified, and the observation equ'n in the SSM
       modeled (haven't worked this out but don't think it'll be hard)
      - to deal with fisheries for which the effort has different temporal
       resolution will be trickier (e.g. monthly sport effort vs weekly
       troll effort)



 4.  Other modifications to SSM
     - more complex spatial framework for inside fisheries
     - chinook maturation schedule/component
     - links with or integration of ocean conditions
     - putting in switch for catch ceiling/quotas  
     - selective fisheries
     - incidental mortality
     - *other things? 



  5. Going into "production mode"
     - calculating historical parameter estimates for a wide range of
       stocks over several years
     - carrying out tests to determine what parameters are or are not
       stock specific

9. Apr 18, 1997 Minutes by Jim Norris, 6/17/97


DATE: June 17, 1997
MEMORANDUM FOR: NMFS Salmon Model Review Committee and interested parties
SUBJECT: Minutes of the April 18 meeting of the NMFS Salmon Model Review Committee
FROM: Jim Norris and Robert Kope


CONTENTS.

1. Jim Norris and Troy Frever report on model code development.

2. Rich Comstock report on data sets for model comparisons.

3. Jim Scott report on model functionality.

4. Jim Anderson report on reference literature.

5. Workgroup tasks and priorities.

6. Next meeting.


1. MODEL CODE DEVELOPMENT.

Jim Norris gave a progress report on model code development. He announced that a discussion page has been established on the web to organize dialog about the model development. The address is:

/harvest/discussion/

As of April 18, the page includes minutes for the Dec 20, 1996 and Feb 21, 1997 meetings and discussion sections for:

-- Model Vision
-- Main Engine Overview
-- Code Object Evaluation
-- Time Objects
-- Geographic Objects.

Each code object section includes a general description of the object, what services the object provides, the data needs for the object, and the methods the object will have. Some objects also include a discussion about how existing models treat the object.

Jim presented a summary overhead of the Model Vision (see web page for full text of the Model Vision). There was a general discussion about model parameterization and calibration.

Doug Eggers noted that these types of large models tend to require more information than is currently available and this creates calibration problems. Jim commented that we will undoubtedly have to decide which parameters to fix and which to allow to float free during any calibration process. His goal is to develop a flexible calibration framework such that the fixed parameters, control variables, and objective function can be defined relatively easily. He mentioned the "Solver" tool in Excel as an example. Pete Lawson commented that we will have to consider the trade-offs (e.g., predictive power) between a model with lots of parameters vs one with fewer parameters.

Jim Norris presented the draft Main Engine Overview for discussion. He stated that his objective was to create a model framework organized around biological processes rather than currently available data. Here's the draft main computing engine:

void RunTheModel()
{
  for (int year = 0; year < Chronographer->nyears(); ++year) {
    YearInit(year);

    for (int TimeStep = 0; TimeStep < Chronographer->nsteps(); ++TimeStep) {
      PhysEnvManager.update_physical_environment();
      BiolEnvManager.update_biological_environment();
      AgeManager.age_cohorts();
      NatMortManager.take_natural_mortality();
      GrowthManager.grow_cohorts();
      FisheryManager.take_fishing_mortality();
      SpawningManager.spawn_cohorts();
      MigrationManager.migrate_cohorts();
      DataManager.timestep_wrapup();
    }

    DataManager.year_wrapup();
  }
}


Jim explained in general terms what each component will do and referred committee members to the web page for full written descriptions. He noted that the FisheryManager object was still a little fuzzy due to the potential need to adjust catches over several time steps to meet some management objectives (e.g., allocation between Tribal and Non-Tribal fisheries, allocation between gear types, meeting escapement goals).

Gary Morishima asked how meta-populations (e.g., the Columbia River stocks) would be treated during the spawning process. Jim replied that spawning will occur by geographic areas rather than individual stocks, so hopefully any meta-population processes could be incorporated within the spawning area object. Gary also suggested that we need to include a method for simulating catch sampling (i.e., data gathering). Jim felt that the FisheryManager object would handle this function.

There was a discussion about the TimeStep definition. The currently proposed definition is:

"A TimeStep is a period of time during which all fisheries within a given region are assumed to be working on the same abundance of fish."

This definition assumes that fisheries within a given region and time step do not interact with one another. Jim noted that for gauntlet type of fisheries the user will have to select time steps appropriately to model the impacts one fishery may have on another. He also noted that Tom Wainwright had suggested on the web page that the definition should be relaxed or generalized so a TimeStep wasn't defined only in terms of the fishing process. Jim noted that the central issue regarding the TimeStep definition is what assumptions we want to make regarding the types of processes and algorithms we will allow during a TimeStep. He also noted that further comments about the TimeStep definition are on the web page.

Jim Scott suggested that the growth process should occur prior to the natural mortality process. Jim Norris concurred and will make the change.

Gary Morishima asked who would be building the models within the flexible framework--users, UW, PSC, States, Tribes, others? Jim answered that the initial version will be developed by UW and this committee, but later versions may be developed by anyone. Troy noted that we are attempting to build good abstract processes to cover any foreseen (and unforeseen) implementations, assuming that somebody will eventually want to add new modules.

There was further discussion of model calibration and the problem of multiple solutions to objective functions that span multiple time steps (e.g., allocation and escapement goals). No specific recommendations emerged.

Jim Scott asked if fish growth would be independent of fishery. Jim Norris responded that modeling growth and the potential effects of size-selective fisheries would be a difficult problem. He described a method he used for sablefish analysis that involved dividing a cohort into 10 growth groups--five for each sex--each with a separate L infinity term in the von Bertalanffy growth function. This technique might create too many cohorts to keep track of in the model. Jim mentioned that it might be helpful for analyzing the effects of size-selective fisheries on a small synthetic group of stocks.
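The growth-group idea above can be sketched as follows. This is a minimal illustration with hypothetical names and made-up parameter values; the sablefish analysis used ten groups (five per sex), each with its own L-infinity in the von Bertalanffy growth function L(a) = Linf * (1 - exp(-k * (a - a0))).

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One sub-group of a cohort, with its own asymptotic length.
struct GrowthGroup {
    double Linf;      // asymptotic length for this sub-group
    double fraction;  // fraction of the cohort in this sub-group
};

// von Bertalanffy length at a given (possibly fractional) age.
double vonBertalanffy(double Linf, double k, double a0, double age) {
    return Linf * (1.0 - std::exp(-k * (age - a0)));
}

// Mean length of the whole cohort, averaged over its sub-groups.
double meanLength(const std::vector<GrowthGroup>& groups,
                  double k, double a0, double age) {
    double sum = 0.0;
    for (const GrowthGroup& g : groups)
        sum += g.fraction * vonBertalanffy(g.Linf, k, a0, age);
    return sum;
}
```

Each sub-group would effectively be tracked as its own cohort, which is why the technique multiplies the number of cohorts in the model.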

Jim stated that the goal for October was to create a prototype model that (1) had a migration component based on Ken Newman's migration model and (2) would mimic the PSC Chinook Model when configured for a coho stock. The next step would be to mimic another existing model (e.g., FRAM, PM, PSC Selective Fishery Model). There was further discussion about how to compare the existing models and which ones should be selected for mimicking by the new model. Jim noted that the goal was to be able to put all of the models into a migration matrix format, and that the parameters for each migration matrix would be generated outside the main computing engine. No specific conclusions were reached during this discussion, but later in the meeting a work group was formed to address these issues.


2. DATA SETS FOR MODEL COMPARISONS.

Rich Comstock gave a progress report on data availability, organization, and extraction methods for migration modeling. These data also will be used to compare different migration models.

Two types of data are available:

1. CWT data (including release, recovery, catch/sampling effort data). These will be essential for fitting Ken's State Space Model.

2. Inter-agency run reconstruction data.

There was a brief discussion of the CWT data with three main points:

1. Rich plans to have the CWT Mark Committee examine the mapping of PSC location codes to statistical areas.

2. Gary Morishima recommended that Rich check on a persistent problem in the CWT database, namely that the expansion factors sometimes do not match with the Catch/Sample File. Rich said he would check on this.

3. Rich reported that there were large numbers of recoveries in Alaska attributed to mixed gear and unknown gear categories. There is a problem allocating these recoveries to individual fisheries when the gear type is unknown.

Rich and Carrie gave a brief report on the Proportional Migration (PM) Model, including some handouts of class structure from the Python code. Jim Norris asked for a clarification about how geographic areas and fisheries were characterized in the PM Model. For example, the PM Model allocates fish to two troll fisheries and one sport fishery off the West Coast of Vancouver Island (NWVI Troll, SWVI Troll, WCVI Sport). Rich (and Peter Lawson) confirmed that the PM Model treats these as three distinct and non-overlapping areas with separate abundances of fish in each area.

3. JIM SCOTT REPORT ON MODEL FUNCTIONALITY.

Jim Scott distributed an outline for model functionality (listed below) along with several samples of output from current models (e.g., FRAM).

1. Models currently provide point estimates for a wide range of statistics including:

a) catch (shakers, non-retention, total mortality) by stock, age, time period, and fishery;
b) exploitation rates by stock and fishery;
c) exploitation rate scale factors by fishery and time period;
d) stock exploitation rate indices (ratio of predicted exploitation rate to base period);
e) cohort size by stock, age, and time period;
f) mature run size by stock, age, and time period;
g) escapement by stock, age, and time period;
h) treaty/nontreaty allocation accounting.

2. The next generation of models must provide a useful measure of the uncertainty of the model predictions. Measures could include prediction intervals for key statistics, risk to stock perpetuation, future production versus current catch, etc.

3. The level of stock resolution is currently roughly by region within Puget Sound (e.g., Hood Canal, South Sound), by groups of rivers on the Washington Coast (e.g., Queets and Quinault, Hoh and Quillayute), and at a greater level of aggregation in other areas (e.g., Oregon Coast).

4. Stock resolution will need to increase in the new model to meet management needs. In order of increasing complexity, these may include:

a) marked and unmarked;
b) hatchery and wild;
c) management units within regions;
d) stocks within management units.

5. The coho version of FRAM has 66 fisheries (each occurring in up to 13 months) from Southeast Alaska to California. Careful consideration will need to be given to the advantages of reducing the resolution to increase the number of CWT recoveries vis-à-vis increasing the resolution to meet management needs and accurately model stock distributions.

6. The model must be easy for biologists to use while working under stressful conditions outside of the office. The majority of the model analyses are now done as part of the annual PFMC preseason planning process, an approximately one-month period from March to April each year. Management proposals are typically developed in consultation with user groups, and evaluation of the proposals is expected to occur within 12 hours.

There was a discussion about data exchange protocols between the new model and existing models. Troy gave a brief overview of token-based input, which will likely be the input format used by the new model. It was decided that until formatted output can be generated from the new model, the new model will simply write data output in ASCII format that can be read by existing models. Peter Lawson mentioned that Phil Flanders from ODFW (Newport) was using Visual Basic to create tools for data exchange between models that may be useful for the current project. Jim Norris agreed to contact Phil to learn more about the tools.


4. LITERATURE REVIEW.

Jim Anderson reported that graduate student Dave Caccia was working on a literature review of salmon migration, but does not have a formal product ready at this time.


5. WORKGROUPS.

A Model Comparison Work Group was formed. Rich Comstock will lead with Jim Scott, Peter Lawson, Robert Kope, and Jim Norris participating. A meeting of the work group was scheduled for April 25, 1997 in Olympia to develop specific tasks for the group.


6. NEXT MEETING.

The next meeting will be 9:00 am Thursday June 19 (following the Resource Modeling Association meeting at UW) at the NMFS Montlake Lab.

8. Geographic Objects by Jim Norris, 4/16/97

2. GEOGRAPHIC OBJECTS +++++++++++++++++++++++++++++++++++

2.1. Geographic Area. ***********************************

2.1.1. General Description.

The highest resolution geographic object in the model. These could be lat/lon blocks, statistical areas, or any other type of area designation that partitions the total geographic range of the model into discrete elements. Geographic areas remain constant over all years and time steps. These are the primary link between the spatial distribution of individual cohorts and the environment, including the distribution of fishing effort.

2.1.2. Services Provided.

-- at each time step links the spatial distribution of individual cohorts with the current environment;

-- ?? Should the geographic objects contain the methods for some biological process, such as natural mortality?? If we want to allow for algorithms that involve density dependent interactions between stocks, it seems like they should;

-- at each time step links the spatial distribution of individual cohorts with the spatial distribution of fishing effort;

-- provides information needed to plot a geographic area on a map.

2.1.3. Methods.

-- plot the area on a map;

-- return geographic coordinates defining the area;

-- return the area common name;

-- return the area model code name;

-- return the physical size of the area in square miles;

-- return the current environmental characteristics of the area.

2.1.4. Data.

-- list of cohorts that can potentially inhabit this area;

-- list of fisheries that can potentially work in this area;

-- common name;

-- model code name;

-- lat/lon coordinates for the perimeter points of the area;

-- current environmental data (eg SST, salinity, plankton biomass, current direction and strength, streambed characteristics).

2.1.5. PSC Chinook Model.

The PSC Chinook Model has no specific geographic objects. However, by separating the fisheries into preterminal and terminal categories, there is a de facto concept of fish migration between geographic areas. Thus, what are termed maturation rates in the PSC Chinook Model will become migration rates in the new model.

2.1.6. PSC Selective Fishery Model.

This model contains five geographic areas:
-- Strait of Georgia;
-- South Puget Sound;
-- Strait of Juan de Fuca and San Juan Islands;
-- West Coast Vancouver Island;
-- Washington/Oregon Ocean.

2.1.7. FRAM.

This model is similar to the PSC Chinook Model in that it has no specific geographic areas, but does partition the fisheries into preterminal and terminal categories.

2.1.8. PM Model.

This model does not have specific geographic objects. However, fishery definitions act as de facto geographic objects. For example, the sport fisheries are generally defined by statistical reporting area, such as "Area 7", "Area 8", "Buoy 10", etc. There are 45 fisheries/geographic areas in this model.

There are three gear types used to define fisheries and each gear type partitions the geographic range differently. For example, within the troll gear group there is a Northwest Vancouver Island and a Southwest Vancouver Island and within the sport gear group there is only one fishery for all of the West Coast of Vancouver Island.

At each time step cohorts are apportioned to each fishery as follows: "The sums of the changes in stock-specific mortalities from all previous time steps are distributed to the next time step based on the stock's proportional initial distribution of abundance among fisheries in that time period." Everyone got that??!!

Correct me if I'm wrong, but I think this methodology assumes that fish available for the WCVI sport fishery are not the same fish that are also available for the two WCVI troll fisheries. In other words, the model tacitly assumes that there are three distinct areas off WCVI.
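For what it's worth, my reading of the quoted apportionment rule can be sketched as follows. This is an interpretation with hypothetical names, not the PM Model code: the accumulated change in a stock's mortality is redistributed across fisheries in proportion to the stock's initial distribution of abundance among those fisheries.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Distribute an accumulated mortality change to fisheries in proportion
// to the stock's initial abundance distribution (initialProps sums to 1).
std::vector<double> redistribute(double totalChange,
                                 const std::vector<double>& initialProps) {
    std::vector<double> shares;
    shares.reserve(initialProps.size());
    for (double p : initialProps)
        shares.push_back(totalChange * p);  // proportional share per fishery
    return shares;
}
```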


2.2. Model Range. **********************************

2.2.1. General Description.

This is a list of all the geographic areas in the model. This list will be used to index geographic areas for algorithms requiring looping through geographic areas. This object will also contain the information required to draw the outer perimeter of the range.

2.2.2. Services Provided.

-- organizes geographic objects in the model.

2.2.3. Methods.

-- return the geographic coordinates defining the outer perimeter of the range;

-- return the physical size of the range in square miles.

2.2.4. Data.

-- list of geographic objects in the model;

-- lat/lon coordinates for the outer perimeter of the range.


2.3. Stock Range. ***********************************

2.3.1. General Description.

This is a list of all the geographic areas that a given stock can inhabit. This object will be useful for (1) illustrating geographic ranges on a map, and (2) making certain algorithms more efficient by looping over a subset of all geographic areas.


2.4. River Reach. ***********************************

2.4.1. General Description.

This is a special type of geographic area that includes data unique to freshwater habitats (eg river morphology, spawning habitat quality, etc).

2.4.2. Services Provided.

-- provides environmental information (physical and biological) to biological process functions, such as survival rates and production functions.

2.4.3. Methods.

-- return data.

2.4.4. Data.

-- DEM data (lat/lon/elevation) for the river bed;

-- DEM data (lat/lon/elevation) for the watershed;

-- river flow schedule;

-- river temp schedule;

-- other river parameters (eg gas saturation, turbidity);

-- river bed condition (eg gravel size, % suitable spawning habitat).


2.5. Spawning Area. *********************************

2.5.1. General Description.

An aggregation of River Reach objects used by a given stock for spawning. This object might contain aggregated data needed for production processes (eg total spawning habitat quality).

2.5.2. Services Provided.

-- provides indexing for the spawning process looping (ie at each time step the spawning process will loop through the spawning areas instead of the stocks);

-- aggregates all information needed for spawning processes;

-- ?? I think the spawning area should also contain the spawning algorithms (as opposed to stock or cohort objects containing the algorithms), since we want to be able to allow for interactions between stocks (eg hatchery and wild) spawning within the same area??;

-- links freshwater habitat characteristics with biological processes;

-- links human land use practices with freshwater habitat characteristics.

2.5.3. Methods.

-- aggregates and summarizes data from several river reaches;

-- returns data to production process or survival equations.

2.5.4. Data.

-- list of river reaches.

7. Time Objects by Jim Norris, 4/03/97


1. TIME OBJECTS +++++++++++++++++++++++++++++++++++

1.1. TimeStep. ***********************************

1.1.1. General Description.

Within a given year, a TimeStep is defined to be a period of time during which all fisheries within a given region are assumed to be working on the same abundance of fish. That is, fisheries in the same region do not interact with one another.

In mathematical terms, the catch equations for all fisheries in the same region and time step will have the same abundance of fish (for each cohort) as an independent variable. For example, if a time step is one week and two fisheries are operating within the same region in that week (eg sport fishery and commercial gillnet fishery), the catch equations for each fishery might be something like:

C1(i) = HR1 * N(i)

C2(i) = HR2 * N(i)

where C1(i) and C2(i) are the catches of each fishery for cohort i, HR1 and HR2 are the harvest rates for each fishery, and N(i) is the abundance of cohort i during that time step in the given region.
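As a minimal sketch of these catch equations (hypothetical names): both fisheries in the same region and time step draw on the same abundance N(i), so neither catch depends on the other.

```cpp
#include <cassert>
#include <cmath>

// Catches of the two fisheries on cohort i within one time step.
struct Catches {
    double c1;  // C1(i) = HR1 * N(i)
    double c2;  // C2(i) = HR2 * N(i)
};

// Both fisheries see the same abundance N; they do not interact.
Catches takeCatches(double N, double hr1, double hr2) {
    return Catches{hr1 * N, hr2 * N};
}
```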

We can generalize this definition to include the physical and biological state of the system. That is, we assume that within a given time step the physical and biological states of each area are constant. These non-salmon variables can then be used by all biological processes as independent variables.

Note that time steps may be of any length; and each year may have different time step definitions.

We are still a bit fuzzy on how multiple time step iteration routines will be controlled (see FisheryManager section below).

1.1.2. Services Provided.

-- organizes model computation flow;
-- link between real-world calendar time and model time;
-- link between all model objects that are time dependent.

1.1.3. Methods.

-- given a calendar time, return the model time step;
-- return the beginning and ending calendar dates;
-- return the beginning and ending julian dates;
-- return the number of days in the time step;
-- return the number of weeks in a time step;
-- return the number of workdays in a time step;
-- return the number of weekend days in a time step.

1.1.4. Data.

-- calendar dates (including year) for beginning and end of the time step;
-- a list of cohorts in the model;
-- a list of model objects that are time dependent (?).

1.1.5. PSC Chinook Model.

This model is designed to conduct analyses over a multi-year time horizon. It has effectively two time steps each year, referred to as "preterminal" and "terminal." There are no calendar dates associated with these time steps.

All natural mortality is assumed to occur prior to harvesting during the preterminal phase. Migration (maturation) occurs at the end of the preterminal phase.

When catch ceilings are specified for ocean net fisheries (which harvest both mature and immature ages of some stocks), the harvest computation algorithm requires iteration over both time steps. The CRiSP Harvest version of this model includes a third phase called "In-River," during which river harvests are computed for the Columbia River fisheries.

1.1.6. PSC Selective Fishery Model.

This model is designed primarily to conduct analyses within a single year. It has 52 time periods per year. Within each time step the model computes three processes: natural mortality, fishing mortality, and migration (including escapement). The model makes assumptions to ensure that fishing mortality computations do not require iteration over multiple time periods.

A weekly time step is necessary for three reasons.

(1) A small time step is needed to simulate the effect of different minimum size limits. Since coho grow rapidly, an accurate estimate of the effect of a size limit requires a relatively fine time scale.

(2) The model is formulated as a series of independent, discrete processes. A small time scale is necessary in order to reduce the error introduced by the lack of interaction among the processes.

(3) The algorithms used to model the selective fisheries assume that fish released are not susceptible to recapture within the time step. A small time step is required for this to be an acceptable assumption.

1.1.7. FRAM.

This model is capable of multiple year analyses with multiple time steps within a year. If multiple years are used, the number of time steps within a year is fixed for all years.

In the current coho configuration (March 1997) it appears that time steps are calendar months. For chinook, the time steps are:

(1) January - April
(2) May - June
(3) July - September
(4) October - December.

1.1.8. Proportional Migration (PM) Model.

The PM model is designed for single year analyses. In current configuration (June 1996) it has six time steps within a year, defined as follows:

(1) January - May
(2) June
(3) July
(4) August
(5) September
(6) October - December.

Fishery simulation proceeds chronologically by time step. Fisheries within a time step are simulated in an independent fashion. Therefore, ordering of the fisheries has no effect on the outcome.

1.2. Calendar Year. ************************************

1.2.1. General Description.

An index variable for real time (e.g., 1995, 1979). Most input data are indexed in real time. Model variables are often indexed starting with 0 or 1.

1.2.2. Services Provided.
-- a link between real time and model time.

1.2.3. Methods.
-- return model year given calendar year;
-- return calendar year given model year.

1.2.4. Data.
-- calendar year;
-- starting and ending calendar dates;
-- model year.


1.3. Model Year. ***************************************

1.3.1. General Description.

An indexing variable for controlling model flow. A period of one year in the model beginning with the first calendar year of the model simulation. For example, if the model simulation runs from 1979 through 1998, then calendar year 1979 is model year 1 and calendar year 1998 is model year 20.

1.3.2. Services Provided.
-- a link between calendar year and model year;
-- orders computation flow in the model.

1.3.3. Methods.
-- return calendar year given model year;
-- return model year given calendar year.

1.3.4. Data.
-- model year;
-- starting and ending calendar dates;
-- calendar year.
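Both year objects reduce to a fixed-offset conversion. A minimal sketch (hypothetical names), assuming as above that the first calendar year of the simulation is model year 1:

```cpp
#include <cassert>

// Links calendar years to model years via a fixed offset.
struct YearIndex {
    int firstCalendarYear;  // e.g., 1979

    // Return model year given calendar year.
    int modelYear(int calendarYear) const {
        return calendarYear - firstCalendarYear + 1;
    }

    // Return calendar year given model year.
    int calendarYear(int modelYear) const {
        return firstCalendarYear + modelYear - 1;
    }
};
```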

1.4. Statistical Week. ******************************

1.4.1. General Description.

A weekly time period within a given year during which some data are collected. Statistical weeks start on a Monday (?) and end on a Sunday (?). Thus, the starting and ending dates for statistical weeks change from year to year. Since much of the fishery data is collected and archived by statistical weeks, we anticipate that statistical weeks will be the time steps in the model.
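A sketch of the week computation under the Monday-start convention queried above (the assumption here, which would need confirming, is that week 1 begins on the Monday on or before January 1):

```cpp
#include <cassert>

// Return the statistical week number for a 1-based day of year, given the
// weekday of January 1 (0 = Monday ... 6 = Sunday).
int statWeek(int dayOfYear, int jan1Weekday) {
    // Days elapsed since the Monday that starts week 1.
    int daysSinceWeek1Monday = (dayOfYear - 1) + jan1Weekday;
    return daysSinceWeek1Monday / 7 + 1;
}
```

Because the weekday of January 1 changes from year to year, the same calendar date can fall in different statistical weeks in different years, which is why the starting and ending dates must be stored per year.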

1.4.2. Services Provided.
-- link between some input data, calendar date, and model date.

1.4.3. Methods.
-- return calendar year;
-- return model year;
-- return starting and ending calendar dates.

1.4.4. Data.
-- starting and ending calendar date.

1.4.5. PSC Selective Fishery Model.

The weekly time periods in this model are probably statistical weeks.

1.5. Calendar Month. ***********************************

1.5.1. General Description.

An index variable for some types of input and output data. A real time month (eg Jan, Feb).

1.5.2. Services Provided.
-- link between some data and model input/output.

1.5.3. Methods.
-- return starting and ending calendar dates;
-- return included statistical weeks;
-- return calendar year;
-- return model year.

1.5.4. Data.
-- starting and ending calendar date.

1.6. Age ************************************************

1.6.1. General Description.

An index variable commonly used with input and output data, but also some model parameters. Mathematically, age is the Calendar Year (plus the TimeStep) minus the Brood Year. 

1.6.2. Services Provided.
-- indexes variables and parameters dependent on age;
-- provides more exact age to growth equation.

1.6.3. Methods.
-- return age given Brood Year, Calendar Year, and TimeStep.

1.6.4. Data.
-- age to the nearest year;
-- age in years to two decimals.
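The two age representations listed above can be sketched as follows (hypothetical names; the time step's position within the year is expressed here as a fraction in [0, 1)):

```cpp
#include <cassert>
#include <cmath>

// Age to the nearest year: Calendar Year minus Brood Year.
int ageToNearestYear(int broodYear, int calendarYear) {
    return calendarYear - broodYear;
}

// More exact age for the growth equation: add the fraction of the year
// elapsed at the current time step.
double fractionalAge(int broodYear, int calendarYear, double stepFraction) {
    return (calendarYear - broodYear) + stepFraction;
}
```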

1.7. Brood Year. ****************************************

1.7.1. General Description.

Identifies the year in which the parents of a cohort spawned. Since salmon usually spawn in the fall and the eggs usually hatch in the winter or spring, the Brood Year usually is not the "birth" year.

1.7.2. Services Provided.
-- Links cohorts to the Spawners that produced them.

1.7.3. Methods.
-- none.

1.7.4. Data.
-- none.


1.8. TimeStepMatrix. ************************************

1.8.1. General Description.

In order to allow for different TimeSteps in different years, we will need this object to define the TimeSteps to be used during each year. Different time steps might be useful when the available data are aggregated differently over years. Early data sets may have data aggregated by month whereas future data sets may have data aggregated by week.

1.8.2. Services provided.
-- Supplies the main computing engine with the right TimeStep definitions to use during a given year.

1.8.3. Methods.
-- return TimeStep definitions given a year.

1.8.4. Data.
-- a matrix of TimeStep definitions.
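A minimal sketch of such a matrix (hypothetical layout), with early years on monthly steps and later years on weekly steps, as in the aggregation example above:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Per-year TimeStep definitions, represented here simply as a list of
// step lengths in days for each year.
struct TimeStepMatrix {
    std::map<int, std::vector<int>> stepsByYear;  // year -> step lengths

    // Return the TimeStep definitions to use for a given year.
    const std::vector<int>& stepsFor(int year) const {
        return stepsByYear.at(year);
    }
};

// Example construction: monthly steps in 1979, weekly steps in 1998.
inline TimeStepMatrix exampleMatrix() {
    TimeStepMatrix m;
    m.stepsByYear[1979] = {31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31};
    m.stepsByYear[1998] = std::vector<int>(52, 7);
    return m;
}
```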

6. Code Object Evaluation by Jim Norris, 4/03/97

This posting describes how model code objects will be defined and evaluated during the model design process. Our goal is to provide an organized framework for discussing the model details.

We will group objects into several general categories and provide separate postings for each category. For example:

-- time;
-- geographic;
-- salmon;
-- fishery;
-- harvest;
-- input;
-- output.

Each object within a category will have the following discussion sections:

-- general description;
-- services provided;
-- methods;
-- data.

If appropriate, additional sections will discuss how the object is used by other models:

-- PSC Chinook Model;
-- PSC Selective Fishery Model;
-- FRAM;
-- Proportional Migration (PM) Model.

At this design stage, we are particularly interested in receiving input from researchers familiar with other models with regard to the ability of the new model to incorporate the other model features. We want to identify and correct incompatibilities as soon as possible.

5. Main Engine Overview by Jim Norris, 4/01/97

This posting provides an overview of the draft main computing engine of the
model. This engine should be evaluated with respect to the model
vision.

We emphasize that this is a draft outline, and we encourage your comments.
By using this web discussion page, we hope to improve and speed up the
design process by expanding the number of people engaged in the process.

Our objective is to create a computing engine that runs in chronological
order and is based on physical and biological processes. By making the
model process oriented, rather than data oriented, we hope to be able to
accommodate a wide variety of submodels to simulate each process. For many
processes, there are no existing submodels. That's OK. The model can just
cycle through those processes and do nothing. The important point at this
design stage is to create a code structure that will easily accommodate new
data and/or theory.

Below is a draft main computing engine (C++ code format), followed by a
brief narrative description of what computations occur at each stage.

**********************

Draft Main Computing Engine

void RunTheModel()
{
  for (int year = 0; year < Chronographer->nyears(); ++year){
    YearInit(year);

    for (int TimeStep = 0; TimeStep < Chronographer->nsteps(); ++TimeStep){

    EnvironmentManager.update_physical_environment();
    EnvironmentManager.update_biological_environment();
    EnvironmentManager.age_cohorts();
    EnvironmentManager.take_natural_mortality();
    EnvironmentManager.grow_cohorts();
    FisheryManager.take_fishing_mortality();
    EnvironmentManager.spawn_cohorts();
    EnvironmentManager.migrate_cohorts();
    DataManager.timestep_wrapup();
    }

    DataManager.year_wrapup();

  }

}

**********************

Time Step Definition.

Within a given year, a period of time during which all fisheries within a
given region are assumed to be working on the same abundance of fish. That
is, fisheries in the same region do not interact with one another.

In mathematical terms, the catch equations for all fisheries in the same
region and time step will have the same abundance of fish (for each cohort)
as an independent variable. For example, if a time step is one week and two
fisheries are operating within the same region in that week (eg sport
fishery and commercial gillnet fishery), the catch equations for each
fishery might be something like:

      C1(i) = HR1 * N(i)

      C2(i) = HR2 * N(i)

where C1(i) and C2(i) are the catches of each fishery for cohort i, HR1 and
HR2 are the harvest rates for each fishery, and N(i) is the abundance of
cohort i during that time step in the given region.

We can generalize this definition to include the physical and biological
state of the system. That is, we assume that within a given time step the
physical and biological states of each area are constant. These non-salmon
variables can then be used by all biological processes as independent
variables.

Note that (1) time steps may be of any length; and (2) each year may have
different time step definitions.

We are still a bit fuzzy on how multiple time step iteration routines will
be controlled (see FisheryManager section below).

**********************

YearInit(year).

Gets all the data necessary for the current year computations. This could
include defining what the time steps are for this year.

**********************

EnvironmentManager.update_physical_environment().

Cycles through all geographic areas and updates the physical environmental
parameters associated with each area (eg SST, salinity, current speed,
current direction, river flow, river temp, etc). Different types of
geographic objects could have different parameters (eg ocean, river,
spawning area). All of these parameters are driving variables that are not
affected by the state of the system within this model. Other models could
be used to produce these driving variables.

**********************

EnvironmentManager.update_biological_environment().

Cycles through all geographic areas and updates the non-salmon biological
environment associated with each area (eg phytoplankton, zooplankton, prey
availability, predator abundance, disease outbreak, etc).

For the moment we can consider these as driving variables that are not
affected by the system. However, some of these variables (eg prey
availability) are clearly affected by the state of the system. We could
handle this by having a set of non-salmon biological state variables
associated with each area. Our TimeStep definition would have to include
the assumption that all salmon cohorts within a given geographic area
interact with the same biological state variables.

In other words, we must acknowledge that we are building a difference
equation model rather than one based on differential equations. Once the
non-salmon biological state variables are set for each area, they can be
used during other biological processes, such as survival and growth.

**********************

EnvironmentManager.age_cohorts().

Cycles through all cohorts and updates age, where age is now computed in
fractions of a year. This higher resolution age will be needed for some
biological processes, such as growth.

**********************

EnvironmentManager.take_natural_mortality().

Cycles through all cohort/area abundances and removes natural mortality.
The cycling could be by cohort then area, or area then cohort. Note that
the natural mortality function could include physical and non-salmon
biological parameters as independent variables.
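
One plausible form, sketched here under the common assumption of exponential survival over the step; the minutes leave the actual function open, and the rate M itself could depend on the physical and biological parameters just mentioned (all values below are invented):

```python
import math

# Hypothetical sketch: natural mortality applied per cohort/area.
def survive(abundance, m_rate, step_fraction):
    """Exponential survival over a time step of given length (years)."""
    return abundance * math.exp(-m_rate * step_fraction)

abundances = {("stock_a", "WCVI"): 10000.0}
M = 0.4   # illustrative annual instantaneous natural mortality rate
for key in abundances:
    abundances[key] = survive(abundances[key], M, 0.25)
```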

**********************

EnvironmentManager.grow_cohorts().

Cycles through all cohort/area abundances and computes fish length
information. This could be simply a mean length, or could also include a
variance component. Again, the growth functions could include physical and
non-salmon biological parameters as independent variables.
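
As one illustration only, a von Bertalanffy increment could serve as the mean-length growth function; the minutes do not commit to any particular form, and every parameter value below is invented:

```python
import math

# Hypothetical sketch: von Bertalanffy growth of mean length over one
# time step. L_inf and k could themselves be functions of physical and
# non-salmon biological variables.
def grow(length, l_inf=90.0, k=0.35, step_fraction=0.25):
    """Mean length (cm) after one time step of the given length (yr)."""
    return l_inf - (l_inf - length) * math.exp(-k * step_fraction)

mean_length = 40.0
mean_length = grow(mean_length)
```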

**********************

FisheryManager.take_fishing_mortality().

We are still a little fuzzy on how this process will cycle and how the
objects will be organized. The main problem is dealing with the potential
need for iteration routines that cross time steps. For example, we may want
to simulate harvest allocation policies that have objectives that span
multiple time steps, such as equalizing Indian and Non-Indian harvests over
a season.

If we keep the same idea that we have now in CRiSP Harvest, this would
cycle through all types of harvesting policies (catch ceiling, fixed
harvest rate, etc). Each policy type would have a list of harvests that it
would traverse and act upon. We could use aggregated time and area objects
to set flags that define the complicated policies that require some
iteration, such as regular and multi-phase catch ceilings.
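
To illustrate the kind of within-step iteration a catch ceiling policy requires, the sketch below finds the effort scalar E at which total catch meets a ceiling. A simple bisection stands in for a production solver (a safeguarded secant method is mentioned elsewhere in these minutes), and all names and numbers are illustrative:

```python
# Hypothetical sketch: find the effort scalar E such that total catch
# across the fishery units under a ceiling equals the target.
def total_catch(E, fishery_units):
    # Linear catch sketch: catch = E * q * N per fishery unit.
    return sum(E * q * n for q, n in fishery_units)

def solve_effort(ceiling, fishery_units, lo=0.0, hi=10.0, tol=1e-9):
    """Bisection on E; total_catch is monotone increasing in E."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if total_catch(mid, fishery_units) > ceiling:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

units = [(0.1, 5000.0), (0.05, 8000.0)]   # (q, N) pairs, illustrative
E = solve_effort(450.0, units)
```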

**********************

EnvironmentManager.spawn_cohorts().

This process will occur by areas, rather than stocks. That is, we will
define spawning area geographic objects. In the traditional model
configurations, there will be a one-to-one correspondence between stock and
spawning areas. However, by defining the spawning process by area rather
than stock, we gain the ability to allow for stock interactions (eg
straying, hatchery/wild interactions) and interactions with the physical
and biological state of the spawning area.

Thus, this process will cycle through all spawning areas (one for each
stock) and, where fish are present, apply a spawning process in that area.
In the most mechanistic formulation, the fish in the spawning areas could
produce eggs and then migrate into fish heaven at the end of the time step.
The number and quality of eggs produced could be a function of the physical
and biological state of the area. At the end of the year, the accumulated
eggs could be turned into fry or smolts during the wrapup process. A less
mechanistic option is just to accumulate spawners and then during the year
wrapup use something like a Ricker function to produce next year's smolts.
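
The less mechanistic option can be sketched directly. The Ricker form is named above, though the parameter values here are purely illustrative:

```python
import math

# Sketch of the year-wrapup option: accumulated spawners S are turned
# into next year's smolts with a Ricker stock-recruitment function.
def ricker(spawners, alpha=4.0, beta=1e-4):
    """Ricker stock-recruitment: R = alpha * S * exp(-beta * S)."""
    return alpha * spawners * math.exp(-beta * spawners)

smolts = ricker(5000.0)
```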

**********************

EnvironmentManager.migrate_cohorts().

Cycle through all cohorts and apply the migration matrices to get a new
abundance distribution. By having physical characteristics for each
geographic area we allow for the possibility of modifying the "average"
migration matrix by the physical environment (eg temp, ocean currents,
river flows).

The migration process will take the place of the maturation process in
previous models.
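
A sketch of applying a migration matrix to one cohort's abundance distribution (the areas and fractions are invented; in the real model the "average" matrix could first be modified by the physical environment):

```python
# Hypothetical sketch: entry matrix[i][j] is the fraction of fish
# moving from area i to area j during the time step. Rows sum to one,
# so no mortality is taken here; migration only redistributes fish.
def migrate(abundance, matrix):
    n = len(abundance)
    return [sum(abundance[i] * matrix[i][j] for i in range(n))
            for j in range(n)]

abundance = [1000.0, 500.0, 0.0]    # fish per area
matrix = [[0.7, 0.3, 0.0],
          [0.0, 0.6, 0.4],
          [0.0, 0.0, 1.0]]
abundance = migrate(abundance, matrix)
```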

**********************

DataManager.timestep_wrapup().

Finish chores (eg spawning) and store appropriate data.

**********************

DataManager.year_wrapup().

Finish chores (eg spawning) and store appropriate data.

5.1. A couple comments by Tom Wainwright, 4/02/97


Just wanted to be the first to add some discussion. The overall structure looks workable. Time steps are defined with reference to fisheries, rather than biological events, which may be a problem when we start including freshwater phases such as juvenile migration or adult spawn-time distributions. Perhaps the best we can do is pick the finest time scale that any of the processes being modeled requires. It also looks like "EnvironmentManager" does everything but catch the fish. While this is in keeping with our "ecosystem management" policies, from a practical standpoint it may make the code for this "class" a bit unwieldy if we start adding specific habitat processes as well as hatcheries (see Vision).

5.1.1. More TimeStep discussion by Jim Norris, 4/04/97

Tom makes a good point about the TimeStep definition. As stated in the draft, the definition is only related to fishing processes. For reference, here's the draft definition:

"Within a given year, a period of time during which all fisheries within a given region are assumed to be working on the same abundance of fish. That is, fisheries in the same region do not interact with one another."

We suggested the above definition because it is a natural extension of the definition used by most other "single pool" models. The main difference is that the new definition is made with respect to EACH REGION. Thus, within each TimeStep, instead of treating the stock as consisting of a coastwide "single pool," we assume the stock is composed of many "single pools," one for each geographic region.

Tom's point is that by defining TimeStep with respect to fishing processes, the definition may cause problems when we want to partition the computations with respect to biological processes.

Time is just time (with all respect to Einstein and Relativity Theory!)... and we can partition it however we want. The critical issue for the model is: What assumptions and restrictions are we going to make about how we model the processes that occur within a TimeStep? For example, we defined a TimeStep by the assumptions we wanted to make about the harvesting process.

We could adopt a more general definition:

"Within a given year, a TimeStep is a period of time during which all independent variables (eg input state variables and parameters) used in process equations are assumed to be fixed."

This definition removes any specific mention of fishing processes, or biological processes. It leaves open any assumptions about what types of equations we can use for the processes (fishing or biological) that occur within a time step. This is not a trivial point, because it may affect how we define other objects and structure the looping process in the main engine.

For example, the assumption that "all fisheries within a given region are ... working on the same abundance of fish" means that within a region and TimeStep, the typical linear catch equations can be computed independently and in any order. Thus, the definition was designed for computational convenience.

The new more general definition would permit more complicated relationships within a TimeStep (eg differential equations) that allow processes to interact. We then would need some type of object to group all the state variables that interact within a TimeStep.

Perhaps we should make a list of all the assumptions we want to make for a TimeStep. Below is a partial list of possible assumptions.

-- input state variables are fixed (eg cohort abundances at start of TimeStep);

-- environmental parameters are fixed throughout the TimeStep (eg average ocean/river temp; average river flow; average predator abundance);

-- fisheries are working on the same abundance of fish;

-- fisheries are allowed to interact with each other during a time step (eg commercial and sport fisheries often occur simultaneously during a week and they could be modeled by a set of donor controlled differential equations);

....

-- others????

4. Model Vision by Jim Norris, 4/01/97

This posting summarizes the vision for the NMFS salmon model. Note that these specifications refer to long-term goals. NOT ALL OF THESE SPECIFICATIONS CAN BE ACCOMPLISHED WITHIN THE TWO YEAR TIME FRAME OF THE CURRENT PROJECT. However, in order to develop a flexible code structure it is necessary to know the long-term vision for the model. Specific model specifications for the current project are yet to be determined.

******************

Purposes.

The model is intended to support evaluation of consequences of management
policies related to:

-- fisheries (short- and long-term consequences);

-- freshwater habitat;

-- artificial propagation.

**********************

Measures.

Evaluations are to be measured in terms of three types of risk/benefit:

-- extinction risk (the medium-term, up to 100 yrs, risk of loss of a
population, ESU, or species);

-- loss of biodiversity (within-species genetic and ecological diversity as
an indicator of the long-term adaptability of populations and species to
changing environments);

-- economic value (measured as the medium-term economic value of salmonid
resources, including harvest and non-harvest values).

**********************

Model Characteristics.

The model will have the following general characteristics:

-- multi-stock, multi-species (with the eventual goal to simultaneously
analyze all west coast stocks!);

-- metapopulation structure (explicitly incorporating migration among
breeding populations);

-- hatchery/wild interactions (incorporated in terms of gene flow,
ecological interactions, and harvest interactions);

-- stochastic population dynamics (including forcing by ocean and climate
variability);

-- Bayesian risk analysis paradigm (incorporating uncertainties in model
parameterization).

**********************

General Issues.

The following issues are likely to come up in some versions of the model:

-- species/stock interactions, including
   -- competition (joint carrying capacity);
   -- disease transmission;
   -- interbreeding (especially hatchery/wild);

-- genetic diversity (some measure of genotype);

-- fitness and natural selection;

-- predation (marine mammals, birds, other fish);

-- habitat dynamics;

-- habitat diversity;

-- effects of selective fisheries (ie. mass-marking of hatchery fish).

**********************

Model Implementation.

The model should have the following characteristics:

-- modular, open architecture so it is easy to reconfigure as new data,
theory, and management options arise;

-- GUI with geographic display;

-- database access tools built-in;

-- statistical procedures module;

-- habitat mediated (all interactions--including within and among stocks,
among stocks and environment, between policies and stocks--occur within a
habitat and are controlled by characteristics of the habitat).

2. Feb 21 Minutes by Jim Norris, 4/01/97

NMFS - UW Salmon Modeling Framework Review Committee

DATE: March 27, 1997
MEMORANDUM FOR: Salmon Model Review Group and interested parties
SUBJECT: Notes from Feb 21 1997 meeting NMFS Montlake Laboratory
FROM: Robert Kope, Tom Wainwright, Jim Anderson

CONTENTS

  1. project goals reviewed
  2. status of oversight committee
  3. scheduling
  4. communication and travel
  5. working groups
  6. data structure
    migration
    maturation
    calibration and optimization
    definitions
    model I/O
    model certification
    model comparison
    data
    documentation
  7. Jim Norris Report
  8. Rich Comstock Report
  9. People for workgroups
  10. Next meeting
  11. Distribution List
PROJECT GOALS REVIEWED

Kope - discussed background for the models

ESA requires consultation for any federal activity that may impact a listed species. The intent is to establish consistent jeopardy standards for ESA consultations and recovery planning. There are two interests for the models:

  1. risk assessment for developing consistent standards for listing and recovery
  2. evaluating impact of harvest on listed stocks
The long-term goals of this project as stated before are:
  1. Provide a common framework for both conservation risk assessment and harvest management analysis.
  2. Incorporate life cycle (production) models for both species to evaluate harvest and conservation strategies.
  3. Incorporate formal risk-assessment methodologies.
  4. Allow flexibility to accommodate new methods and model designs.
  5. Provide an interface with the largest possible subset of the ocean and freshwater databases maintained by PSMFC.
  6. Expand the geographic scope of current harvest models.
  7. Link coho and chinook salmon harvest models.
Discussion noted we can divide the effort into a harvest model effort and a habitat model effort. The two efforts would eventually be joined but the oversight committee being formed is to address the issues of harvest modeling. The two models are:
  1. RAM (Risk Assessment Model) - A coastwide salmonid conservation and extinction risk assessment model. This model will apply the Bayesian risk assessment paradigm to multi-species and multi-population assessments in support of recovery and restoration planning for protected stocks. The model will eventually incorporate effects of harvest, hatchery, and habitat management as well as ocean variability and metapopulation structure.
  2. HAM (Harvest Assessment Model) - An integrated chinook and coho harvest model. This model will be directed toward assessing year-to-year management regulations and longer-term harvest management policies. It will incorporate spatial aspects of migration and harvest and explicitly include bycatch and non-landed mortality components.
STATUS OF THE OVERSIGHT COMMITTEE

As defined previously, the Oversight Committee will focus mainly on the Harvest Assessment Model. Its responsibilities include:

A. Specification activities:
-- set delivery dates of a management model
-- identify functionality for ESA and comanagement
-- assist in developing the conceptual framework
-- identify input and output requirements
-- work on a regular, but informal, basis with the development team

B. Certification activities:
-- identify and provide data for calibration
-- provide data to certify that the general model can be configured to FRAM, SFM, SSM, and CRiSP2
-- evaluate sensitivity of the model to define critical assumptions and implications
-- communicate results and characteristics of the model to the management community
-- use the model in the negotiation process
Legal issues of who can participate in the Oversight Committee are framed by the Unfunded Mandates Reform Act, which exempts committees composed entirely of government representatives from the requirements of the Federal Advisory Committee Act. This provides no basis for formal participation of the Canadian representatives, but they are encouraged to participate in an informal capacity.

Jim Scott suggested we have PSC as the oversight organization since it already has a certification process. He also added he likes the role of the guidance panel as outlined in the notes from the Dec 20 1996 meeting.

It was made clear that to obtain a model for management, the fish managers must be closely involved with all aspects of the model development.

SCHEDULING:

Replacement for FRAM is needed by 1998.

The new model will be modified to split groups into marked and unmarked, tagged and untagged groups. The model will be able to deal with these groups both individually and together.

In developing a general coastwide model we look at the currently available models. We are developing a general structure that should be applicable to multispecies, multistock analyses.

Our short-term goal is to have a prototype model running by the end of Sep 97. By this we mean a generalized model that can be configured to duplicate the output of one of the existing models, such as the PSC chinook model. For the longer-term our goal is to have a model that can be used to evaluate selective fisheries for the 1998 season. That would require having a model certified sometime in Feb 98.

COMMUNICATION AND TRAVEL:

The issue of communication among the Oversight Committee and the informal subgroups was addressed. It was suggested that PSMFC might assist by supporting travel for the Oversight Committee and any subgroups.

A Web page will be used to exchange information among the subgroups, the Oversight Committee, and the model development team. A Web-based list server has been implemented and can be used to organize communications on specific topics; see the harvest web page:

Comments can be added to the Web using the hypertext notes at http://www.cqs.washington.edu/harvest

The web list server could eventually include a salmon management web group that will connect all salmon harvest management activities in the region. Software to develop a community web page has recently become available (www.throw.com). This might have value to our model development and salmon management in general. Anderson will review this and see if it is worth implementing.

WORKING GROUPS:

We identified the need for informal working groups to address specific issues. Currently identified issues are:

Model structure: To develop the model code structure

Migration algorithms: To identify the types of algorithms for ocean migration

Maturation algorithms: To formulate maturation rates in terms of fish migration to rivers

Calibration and optimization: To define procedures for calibrating the model with CWT and fishing effort data, and to identify algorithms to find optimum harvest strategies within the model

Definitions: to standardize definitions between models

Model I/O: To define input-output requirements for harvest management

Certification: develop a process to certify a model for management

Model Comparison: compare existing models to the new model

Data: Obtain data for comparison of existing models to the new model

Documentation: to be developed by UW.

Risk Assessment: to identify risk assessment model function from the recommendations of the risk assessment workshop panel report (anticipated to be available in March 1997).

Notes on the issues facing the working groups are discussed below

MODEL DATA STRUCTURE:

Progress on developing model structure was presented by Jim Norris

The model structure will

  1. synthesize all existing models into a general format based on processes rather than data structures;
  2. allow for different algorithms for each model process;
  3. provide a flexible calibration procedure;
  4. track cohorts (eg marking and tagging status) separately.
MIGRATION:

A work group for migration model development:

A first task is to collect literature. This should include a literature survey of the migration models available. This will be collected and distributed to the groups as a series of papers. Jim Anderson will employ a student (David Caccia) to put the literature together on migration.

MATURATION:

The approach would be to express maturation as an age specific migration rate into the river

Kope has algorithm to express maturation function in migration framework

Johnson concerned about maturation to migration basis framework

Scott noted it may be difficult to calibrate a migration-maturation structure with CWT

Noted: age of maturation changes from year to year and may relate to ocean distribution of fish as related to ocean circulation and fish growth

CALIBRATION AND OPTIMIZATION:

Model calibration will be considerably involved and will be considered at a later meeting. We anticipate creating an informal model calibration subgroup that will address the issues in detail.

The discussion of "optimization" occurred in several contexts. One was how to calibrate the model. Jim Scott remarked that his staff advised that we should use optimization routines to find the set of free parameters (ie calibrated parameters) that give the best fit of the model outputs to corresponding observed data.

A second context was in the basic model computation engine. Here the problem is that of finding single-year harvest levels that satisfy complex policy objectives, such as equalizing Indian and Non-Indian catches, meeting minimum escapement goals, and allocating catches within Indian and Non-Indian fisheries. These types of optimization algorithms would run during simulation years as part of the computation engine. Someone observed that there may be multiple solutions and that it will be difficult to identify these and select between them during within-year computations.

A third context was that of designing code that "optimizes" overall computation speed.

DEFINITIONS:

Blair Holtby noted that model specifications are needed to define equivalence of terminology between the new and existing models. This led to a specific task to identify common nomenclature for the models. A table will be developed to identify equivalent model terms and parameters. Lead on this was Jim Packer (902-2754). Jim Scott and Larry Lavoy will assist. The goal is to have a table ready for the next meeting.

MODEL I/O:

Input/output specifications must be identified by the managers using the model. To do this we will first spend time with the FRAM model to understand its output and how it meets the needs of managers.

Blair Holtby suggested a model needs risk assessment output specifications including:

(short term risk = comanagement issue) risk of not achieving an escapement goal within a season

(long term risk = ESA issue) risk of long term extinction

MODEL CERTIFICATION:

The harvest assessment model will need to be certified for use by fisheries managers. A certified model would be used by the formal fisheries technical committees responsible for making recommendations for harvest management. These committees generally will not have time to evaluate a new model. In this case our task is to provide enough information so the technical teams can understand the steps we have made to validate our model for management. The Oversight Committee might take this function of certifying the model. The certification will include model documentation and comparison to existing models with a common data set.

Membership in the certification group.

The certification is likely to be a PFMC and PSC process. Our model Oversight Committee can help identify how we can assist these groups in model certification.

It was noted that Canada has similar issues upcoming and they wish to participate in our process to the degree possible.

MODEL COMPARISON:

An important step in certification is to compare the model to the existing models. This will involve a subgroup to run the existing models and compare them to the new model.

The new model will be compared to FRAM

Pete Lawson suggested we need exact equivalence of the new model to the existing models. This may or may not be entirely possible. If one-to-one equivalence is not possible, the Oversight Committee will need to determine whether the differences between the models are significant and whether changes need to be made to resolve them.

Need to work with and through the technical advisory committees for the various fishery management groups

FRAM is the only existing model for coastwide coho

PSC or CRiSP2 model is the only existing coastwide chinook model

DATA:

Comstock distributed three data maps to be used for comparison of models:

fish (catch, sampling effort, stock), age, and time period. Note that catch sampling and fish sampling may differ. We need one set of definitions for the data reconstruction and comparison.

Rich Comstock and Rick Moore: can reconstruct data for comparison of the models in three time steps.

Rich Comstock will work with Norma Sands, Rick Moore, and Peter Lawson the week after the meeting. Was this done? What was accomplished?

DOCUMENTATION:

A manual covering theory, calibration, and validation: UW will take the lead in model documentation with assistance from NMFS and the Oversight Committee.

Develop an executive summary showing main issues of the documentation. This would give fisheries managers a quick overview of the model.

The CRiSP 2 manual is currently being updated. This can be used as a framework for the new manual.

JIM NORRIS REPORT ON MODEL DEVELOPMENT

On the issue of fish migration in the model

A migration matrix will be computed outside the model

The matrix will be stock and year specific. The Newman one-dimensional migration model is one way to define migration, but we can define other methods that allow branching of the linear model. A third way is to represent migration by spatial boxes. A fourth way is a two-dimensional diffusion model.

All migration models are likely to use CWT and effort data. A question was asked: is there another or better way to estimate migration? It is also possible to incorporate a numerical simulation model such as NERKAsim.

Organization of the data structure follows the convention

REGION Number -> Fishery unit -> Harvest Object
Time Step: A period of time during which it is assumed that all fisheries within a given geographic region are operating on the same abundance of fish from each cohort.

The mathematical implication is that the independent variables used within the harvest functions are the same for all fisheries within the time step and region.

TASKS PRIOR TO NEXT MEETING

Objects definition meeting

Comparison of terminology in the different models

Tasks by the model development group

Norris: develop object structure for model

Norris: identify nomenclature of model terms

Scott: consider issues of optimization routines

RICH COMSTOCK PRESENTATION ON DATA

Rich Comstock described his data for model comparison including

CWT recovery

catch and effort data

identifying statistical catch areas

Rich is working with Ken Newman on a frequent basis

Question: What will be done with the PM model? It has a heuristic migration model. The PM model has been converted to Python object-oriented code, which is similar to C++.

PEOPLE MENTIONED FOR WORK GROUPS

  1. model nomenclature table - 3 Jims and Packard
  2. conceptual framework objects - Jim Norris, Troy Frever and Rich Comstock
  3. model functionality for fisheries management, including needed I/O: Scott, Dan Bottom (?), Doug Gill
  4. calibration issues: Ken Newman, Rich Comstock
  5. literature on migration and climate ecology: Jim Anderson will find a student
  6. list server and web page issues: Anderson and staff
NEXT MEETING APRIL 18 at NMFS Montlake Lab

DISTRIBUTION LIST

Jim Anderson  UW 
Robert Bayley  NMFS 
Jim Berkson  CRITFC 
Rich Comstock  USFWS 
Carrie Cook-Tabor  USFWS/NMFS 
Judy Cress  UW 
Rich Dixon  CDFG
Peter Dygert  NMFS 
John Geibel  CDFG
Brent Hargreaves  DFO 
Ken Henry  NMFS, REFM
Blair Holtby  DFO 
Ron Kadowaki DFO 
Jeff Koenigs  ADFG
Robert Kope  NMFS 
Pete Lawson  ODFW/NMFS 
Steve Lindley  NMFS
Richard Methot  NMFS 
Marianne McClure  CRITFC 
Rod McInnis  NMFS
Rick Moore  WDFW 
Gary Morishima  QIN 
Ken Newman  U of Idaho 
Jim Norris  UW 
Mike Prager  NMFS
Bill Robinson  NMFS 
Norma Jean Sands  ADFG 
Jim Scott  NWIFC 
Steve Smith  NMFS 
Dan Viele  NMFS
Tom Wainwright  NMFS 
Brian Riddel Department of Fisheries and Oceans, Nanaimo BC V9R 5K6

Those who don't have email address on this list will each receive a hard copy in the mail.

1. Dec 20 Minutes by Jim Norris, 4/01/97

NMFS-UW Salmon Modeling Framework
Review Committee 12-20-96

Contents
Attending List
NMFS Expectations
State-Tribal Expectations
Relationship between Research & Management Models
Work by CQS/UI to Date
Model Timeline & Committee Structure
Model Evaluation Framework
J.N.'s Outline for a Harvest Model
Distribution List

ATTENDING LIST
Jim Anderson  UW 
Jim Berkson  CRITFC 
Rich Comstock  USFWS 
Carrie Cook-Tabor  USFWS/NMFS 
Judy Cress  UW 
Peter Dygert  NMFS 
Robert Kope  NMFS 
Pete Lawson  ODFW/NMFS 
Richard Methot  NMFS 
Marianne McClure  CRITFC
Rick Moore  WDFW 
Gary Morishima  QIN 
Ken Newman  U of Idaho 
Jim Norris  UW 
Jim Scott  NWIFC 
Tom Wainwright  NMFS 

NMFS EXPECTATIONS

Robert Kope (NMFS - NWFSC) outlined the rationale and expectation of NMFS for this project. The National Marine Fisheries Service (NMFS) has responsibility for administering the US Endangered Species Act (ESA) for anadromous fish and shared responsibility for managing ocean salmon fisheries. Because of this dual responsibility, NMFS has a need to evaluate the constraints of ESA considerations on harvest management, and the impacts of harvest and habitat management on the recovery of depleted stocks. These evaluations need to be made on a coastwide basis and should be internally consistent and consistent with models used in other management arenas.

The models necessary to make these evaluations will have many similarities. They will utilize the same catch and escapement data, and incorporate the same stocks and fisheries. We would like to be able to accomplish these objectives within a single broad modeling framework, to ensure as much consistency as possible between harvest and conservation models and to reduce redundancy. We believe the most rational way to approach this is to link harvest models for chinook and coho salmon with life-cycle models for these species. This coupling will allow evaluation of harvest impacts within a single year, as well as evaluation of management strategies over the long run. We would also like to explicitly incorporate migration as a means of allowing greater flexibility in modeling fishing patterns that depart from historic conditions and in calibrating models to recent years with zero harvests in some areas.

The goals of this modeling project are to develop a modeling framework to:

  1. Provide a common framework for both conservation risk assessment and harvest management analysis.
  2. Expand the geographic scope of current harvest models.
  3. Link coho and chinook salmon harvest models.
  4. Incorporate life cycle (production) models for both species to evaluate harvest and conservation strategies.
  5. Allow flexibility to accommodate new methods and model designs.
  6. Provide an interface with a large subset of ocean and freshwater databases maintained by PSMFC.
Ideally the harvest models should be capable of running a single year to evaluate regulations, or running in a simulation mode to evaluate harvest and restoration management strategies over multiyear horizons. The model that presently comes closest to meeting these needs is the CRiSP harvest model developed by UW - CQS, with a C++ implementation of the PSC chinook model as its engine and a GUI.

To begin this effort of model consolidation, the NMFS NWFSC has initiated a modeling project in cooperation with the University of Washington, Center for Quantitative Science. We have convened this committee to provide advice on the specification and design of the model framework in order to maximize the compatibility of this modeling effort with the needs and modeling efforts of the co-managers.

STATE-TRIBAL EXPECTATIONS

Jim Scott (NWIFC), Jim Berkson (CRITFC), and Gary Morishima (QIN) all expressed concerns that: 1) the role of this advisory committee needs to be clearly defined in terms of the responsibility of the committee and the scope of its input into the model development process; and 2) the committee needs to have more than an advisory role or the model will not be accepted and used by the co-managers.

Gary Morishima added that the end product needs to be clearly defined in order for the co-managers to decide whether it is more worthwhile to invest their time in participating in this process, or to invest their time and energy in other modeling enterprises. The most urgent need, at the present time, is to develop a consensus model to evaluate selective fisheries for mass-marked hatchery coho salmon for use in the management process for 1998 fisheries. He reiterated that there needs to be a clear understanding of the role of this committee in designing and implementing any model in order for the participants to commit to the project.

Editor's notes

[ Jim Scott suggested in an earlier email to Ken Newman that the role of the committee should be to "oversee development and implementation of the model". The purposes of the committee are twofold:

1) With the assistance of the team, develop specifications for model processes, resolution, input screens, and output reports.

2) Assist the team in the collection and analysis of data, testing of computer code, and implementation of the model.

Scott also indicated:

"I want to assure that the model is used in the future by the states, tribes, and NMFS. The best way to assure this is to have representatives of each of these agencies develop the specifications, assists in data analysis, etc. This may initially slow model development, but any other course will result in the development of an alternative model by the tribes and states."

For the committee to serve this purpose, "periodic meetings (once every 6 months, say)" will probably not be sufficient. Perhaps you already anticipated this, but key (perhaps all) members of the committee will require at least weekly contact. Monthly progress reports may be sufficient for the remainder of the committee. ]

RELATIONSHIP BETWEEN RESEARCH AND MANAGEMENT MODELS

Tom Wainwright and Jim Anderson discussed the distinction between the research/development phase of modeling and production of a particular model for direct management analysis. Jim Anderson noted the need for a formal process for pulling a specific model version out of the research stream into the management context.

To elaborate on this two-stream process, consider that research model development occurs in small incremental steps. Changes in the model are typically designated by subnumbers (e.g., 1.1, 1.2), and the models remain in the beta test phase and are not formally released. Ideally, research models should encompass the best available ecological and statistical approaches and serve as a neutral analytical tool. Research models can explore ecological interactions through sensitivity analysis, identify needed research, and assess calibration approaches.

Management models are formally released for a specific management purpose and evolve from a particular version in the research model stream. Typically, releases coincide with the yearly need to set harvest policy. Management models are part of a negotiation process and thus require some type of certification. This may involve model calibration, validation, documentation, and finally presentation to the management agencies. A management model should serve as a quantitative expression of a negotiated agreement between management agencies.

Editor's notes

[ The advisory committee should take an active role in the communication between these two streams of model development. Two distinct activities are envisioned in the figure below.

A) Specifications: the committee specifies model requirements and helps identify research priorities and schedules to meet the needs of management. Specification activities occur as needed.

B) Certification: the committee certifies a version of the research model for use in management. Certification activities take place within specified dates.

 
       Time     Research Model          Management Model
                versions                versions and actions 
                
        |       | 1.0 <------A--------  manager specification   
        |       | 1.1                   
        |       | 1.2                   
       year1    | 1.3 -------B------- |Version 1.3 certification
        |       |       
        |       | 2.1 <------A--------  manager specification   
        |       |                       
       year2    | 2.2 -------B------- |Version 2.2 certification        
        |       |       
        v       v
Proposed tasks and responsibilities of the committee for specification and certification are outlined below.
A. Specification activities:
- indicate delivery dates of management models
- identify functionality for ESA and co-management
- assist in developing conceptual framework
- identify input and output requirements
- work on a regular, but informal basis, with development team
B. Certification activities
- identify and provide data for calibration
- evaluate sensitivity of model to define critical assumptions and implications
- communicate results and characteristics of model to management community
- use model in negotiation process
What is not resolved is how different management goals, ESA and co-management, would interact within this framework. As was proposed, two research model streams could be implemented, each attached to a different management stream. An alternative is to merge the modeling streams or set up a semi-formal comparison of the models. Our current tack seems to be to compare the models and at a later date (soon, though) determine whether we can merge the modeling efforts. In any case these issues are worth articulating further. ]

WORK BY CQS/UI TO DATE

Jim Norris (UW - CQS) said that he thinks we can have a coho selective fisheries model operational in this framework within 2 years.

Jim Norris outlined the model specification and design process and the flow of software development. He noted that the traditional approach to software development was for an initial analysis of the problem to produce a functional specification, the design phase to produce a design document, programmers to develop the code, and a test phase to yield a deliverable product. If the deliverable product failed to meet the original need, the entire process was repeated. The current model design follows the same general path, but with feedback at each level in order to avoid iterating over more than one level in the development process. See figure below.

 
        Software Development Process
 
          Analysis  -----  Functional Specification
           |  ^
           |  |
           v  |
          Design    -----  Design Document
           |  ^
           |  |
           v  |
          Code      -----  Alpha Application
           |  ^
           |  |
           v  |
          Test      -----  Deliverable
         
        Goal: Avoid iterating over more than one level.
He also pointed out the need to decouple the research aspect of modeling from the management aspect. Through research, models are constantly developing and changing. For management purposes, everyone needs to be working from the same page, so a model version must be spun off from the research phase and held fixed for a management cycle.

Jim Norris then presented a demonstration of the current capabilities of the CRiSP Harvest model. The most recent version of the PC model for Windows NT or Windows 95 can be downloaded from the CQS Website (/crisp/crisp2pc.html).

CRiSP.2 is a user-friendly computer model that simulates the harvest of 30 chinook salmon stocks by 25 fisheries over an extended time horizon. The geographic range covered by the model extends from Southeast Alaska to the Oregon coast. Ten stocks and two fisheries from the Columbia River basin are included in the model. The computational engine of CRiSP.2 is based upon the forecasting portion of the Pacific Salmon Commission (PSC) Chinook Model.

(see Jim Norris' Outline for a Harvest Model at end of text)

A key feature of the model is the interaction between stocks through annual catch ceilings, or quotas, imposed upon fisheries that harvest multiple stocks. Catch ceilings are the primary PSC management tool. As stocks rebuild or decline at different rates over time, relative harvest rates in fisheries with catch ceilings also change. Single stock models cannot simulate this type of interaction.
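The ceiling mechanism described here can be illustrated with a toy calculation (the exponential catch equation, abundances, and catchabilities below are hypothetical, not the CTC Chinook Model's actual form, and the bisection stands in for the model's equation solver): a common effort scalar E is solved so that summed catch across stocks hits the ceiling, so as stock abundances shift between years, the per-stock split of the same ceiling shifts with them.

```python
import math

def total_catch(abundances, catchabilities, effort):
    """Total catch across stocks for a given effort scalar.
    Illustrative exponential catch equation, one term per stock."""
    return sum(n * (1.0 - math.exp(-q * effort))
               for n, q in zip(abundances, catchabilities))

def solve_effort_for_ceiling(abundances, catchabilities, ceiling,
                             lo=0.0, hi=100.0):
    """Bisection solve for the effort scalar E at which total catch
    equals the ceiling (total catch is monotone increasing in E)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if total_catch(abundances, catchabilities, mid) < ceiling:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Two hypothetical stocks harvested by one ceilinged fishery.
N = [100_000.0, 50_000.0]
q = [0.2, 0.3]
E = solve_effort_for_ceiling(N, q, ceiling=30_000.0)
catches = [n * (1.0 - math.exp(-qi * E)) for n, qi in zip(N, q)]
```

If the abundance vector N is changed and E is re-solved against the same ceiling, each stock's share of the ceiling changes; this is the stock interaction that a single-stock model cannot capture.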

Gary Morishima suggested that it is difficult to keep track of the changes you have made to parameters when you are running the CRiSP harvest model in its interactive mode. It would be helpful to be able to save a listing of the parameter values used in a particular run along with the output from that run.

Jim Norris responded that he is aware of this problem and has been exploring two ways of addressing it. The first is to have a "save" utility that saves the current model parameters in a folder that can be named by the operator. The second is to create a "log file" that keeps a continuous record of all parameter changes during a work session. He intends for the new model to have one or both of these capabilities.

Jim also responded that the input files are all text files and the model can be run in a batch mode with multiple sets of input files without using the GUI. This allows the user to make multiple runs faster and to associate the output with the input files.

Jim also expressed the view that the most difficult part of the model to code will be the ability to simulate complex management scenarios with multiple objectives over multiple time steps and fisheries.

After lunch, Ken Newman talked about the state space approach he is investigating to explicitly model migration of coho salmon. He explained the basic structure of the state space model and the algorithms he is using to describe the initial distribution of fish and to move them around. Details are available at the CQS Website. He is using a maximum likelihood criterion to fit the model parameters, with a Kalman filter algorithm to calculate the likelihood function. He is currently working with coded wire tag data for three cohorts of coho from Grays Harbor, using a one-dimensional migration model with 13 area cells for migration and fishing.
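As a rough sketch of the state process in such a model (not Newman's actual model or its Kalman-filter fitting; the stay/advance probabilities here are invented), a 13-cell one-dimensional migration can be written as a row-stochastic transition matrix applied repeatedly to an abundance vector:

```python
def transition_matrix(n_cells, p_advance, p_stay):
    """Row-stochastic transition matrix for a 1-D migration corridor:
    each time step a fish stays in its cell, advances one cell, or
    falls back one cell; the boundary cells absorb the excess."""
    p_back = 1.0 - p_advance - p_stay
    T = [[0.0] * n_cells for _ in range(n_cells)]
    for i in range(n_cells):
        T[i][i] = p_stay
        if i + 1 < n_cells:
            T[i][i + 1] = p_advance
        else:
            T[i][i] += p_advance        # absorb at the terminal cell
        if i - 1 >= 0:
            T[i][i - 1] = p_back
        else:
            T[i][i] += p_back
    return T

def step(dist, T):
    """One time step: redistribute abundance among cells via T."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

# 13 area cells, all fish initially in cell 0.
T = transition_matrix(13, p_advance=0.6, p_stay=0.3)
dist = [10_000.0] + [0.0] * 12
for _ in range(20):
    dist = step(dist, T)
```

The fitting problem is then to estimate the transition probabilities (and observation parameters) from CWT recoveries, which is where the Kalman filter likelihood comes in.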

Jim Scott pointed out the problems of scaling models for single populations to account for total catch. When single populations are modeled individually, and catches for individual stocks are added to get total catch in a port area, there will be discrepancies between the data and the model results. We should consider ways to fit the modeled catches from individual stocks to the total catch data simultaneously.
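A minimal sketch of the simplest (proportional) version of this adjustment follows, with hypothetical stock names and catch figures; simultaneously fitting all stocks to the total, as Scott suggests, would replace this post-hoc rescaling:

```python
def scale_to_total(modeled_catches, observed_total):
    """Rescale modeled per-stock catches so they sum to the observed
    total catch in a port area. A simple proportional adjustment: it
    removes the discrepancy but attributes it pro rata across stocks."""
    modeled_total = sum(modeled_catches.values())
    if modeled_total == 0:
        return dict(modeled_catches)
    k = observed_total / modeled_total
    return {stock: c * k for stock, c in modeled_catches.items()}

# Hypothetical per-stock modeled catches in one port area.
modeled = {"GraysHarbor": 4_000.0, "Columbia": 5_000.0, "PugetSound": 3_000.0}
scaled = scale_to_total(modeled, observed_total=10_000.0)
```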

MODEL TIMELINE AND COMMITTEE STRUCTURE

This discussion focused on two fundamental questions: the goals of the project and the time frame for completing them. The project has a long-term goal of developing a coastwide salmon model framework that can be used for both conservation risk assessment and harvest management. The first-year focus is on developing software infrastructure (data structures, user interface, database interface, etc.) and on developing a prototype coho harvest model as a first application.

Jim Norris said that he believes we can have a mechanistic model to move fish around in multiple timesteps by September of 1997. This would not be a complete working model, and would probably not include all stocks of interest.

Gary Morishima pointed out that the immediate need is to have a model for selective coho fisheries ready for the management process for the 1998 season. In order to be useful this would need to be thoroughly reviewed by October or November of 1997.

This time frame is not consistent with the current project scope, but it may be possible to have something ready for the 1999 season.

The committee recommended the following goals:

  1. The short-term focus of the model framework should be selective fisheries evaluation.
  2. The framework may be useful to compare the FRAM and PM models with the State Space model.
Jim Scott sees the framework as a useful platform to compare the three models as three alternative methods within the same model framework.

Gary Morishima pointed out that there is another selective fisheries model developed in the PSC process. Jim Scott characterized the PSC model as analogous to FRAM with different parameterization and estimation algorithms. He sees the comparison with the state space approach as problematic because it is unclear how the SS model will mesh with the spatial complexity of Puget Sound.

Rich Comstock recommended the coho cohort reconstruction model (MSM) output as a useful test data set for the comparison of the different model algorithms.

MODEL EVALUATION FRAMEWORK

For evaluation, the following models were identified:

PM: Proportional Migration selective fishery model. Authors Moore, Lawson, and Comstock, June 29, 1996
SS: State Space approach model. Author Newman
FRAM: Fishery Regulation Assessment Model. Authors Washington and Oregon
SFM: PSC Selective Fishery Model. Author Moore, May 26, 1995
MSM: Mixed Stock Model
CRiSP: Author Norris (?)
We require:
- general descriptions of each model
- a comparison of model theory: can they be reduced to a single mathematical structure?
- identification of how models are aggregated across stocks for comparison (apples to apples)
- identification of common data to use for comparison
We agreed to:
  1. get out a report of this meeting and an outline of a model evaluation framework by mid-January for inclusion on the web site /harvest/
  2. circulate a draft statement of project scope and description of the role of this committee by mid-January for comment by attendees
  3. compile and make available a set of documentation of the selective fisheries models and test dataset

  4. The selective fishery model is available on the web site and can be obtained as a WordPerfect file.

    (Comstock's test data will include coho production and expansion factors. This will be posted on the web site.)

  5. Complete a comparison of the analytic structures of the FRAM, Proportional Migration, and Newman's migration models by mid-March.
  6. use the CQS Website (/harvest/) for information exchange
  7. meet again at the Montlake Lab on Friday, February 21.
  8. distribute an agenda and description of the role of this committee in development and implementation of the modeling framework prior to the next meeting.

JIM NORRIS' OUTLINE FOR A HARVEST MODEL

Draft 12/19/96

  1. General Model Requirements
    1. Flexible time steps within a year.
    2. Ability to have multiple species (e.g., coho and chinook).
    3. Ability to have multiple stocks within species.
    4. Ability to have multiple cohorts within a stock (marked/unmarked).
    5. Ability to have multiple fisheries.
    6. Ability to have management constraints (e.g., catch ceilings, catch allocations between fisheries). For each fishery, ability to define and simulate fishing patterns that are different from historical patterns.
    7. Ability to incorporate population genetics models.
  2. Basic Idea to Handle Migration
    1. Divide the entire coast into high resolution (i.e. small) discrete geographic units (call these statistical areas).
      • Assign weighting factors to each stat area (e.g. based on relative size).
    2. Define stock migration areas as aggregations of statistical areas.
    3. Define fishery harvest areas as aggregations of statistical areas.
    4. When estimating migration patterns for individual stocks each year, there will be a one-to-one correspondence between stock migration areas and fishery harvest areas.
      • This is necessary because the migration patterns are estimated from CWT recovery data by fisheries.
    5. In order to simulate the effects of alternative fishing patterns, we need a mechanism that will decouple the stock migration areas from the fishery harvest areas.
      • To do this we allocate stock abundances and fishing efforts to individual statistical areas within stock migration areas and fishery harvest areas (using the weighting factors).
      • Harvests and abundances are always computed and tracked at the statistical area level.
      • Harvests and abundances can be reported by statistical area, migration area, or harvest area.
  3. Each Simulation
    1. Define high resolution statistical (geographic) areas.
      • These will be the high resolution statistical areas used for CWT recoveries.
      • Each area will have a weighting factor (e.g., by relative size).
      • These areas can be combined to form stock migration areas and fishery harvest areas.
      • Stock abundances within a stock migration area will be distributed to individual statistical areas by the weighting factors.
    2. Define stocks.
      • Name.
      • Initial abundances.
      • Stock migration areas as collections of statistical areas.
      • Migration parameters.
      • Production parameters.
      • General info.
        • Spawning locations/descriptions
        • CWT groups
        • Photos
    3. Define fisheries.
      • Name.
      • Gear type.
      • Catchability coefficients.
      • Fishery harvest areas as collections of statistical areas.
      • General info.
        • Description.
        • Photos.
  4. Each Time Step
    1. For each stock.
      • Determine abundance by age and migration area. Get the transition matrix for this year, stock, age, and time step. Transition matrices will be based on SSM results. Transition matrices may be computed a priori, at start-up, or at the start of each time step. Apply transition matrix to abundance vector.
      • Determine abundance by age and statistical area. Distribute abundance within each stock migration area to each stat area.
      • Determine abundance by age and fishery area. Re-aggregate abundances by fishery area. In most cases, the migration areas and fishery areas will be the same each year. However, it may be desirable to hindcast the effects of a different fishing regime over an underlying migration pattern.
    2. For each fishery area.
      • Determine fishing effort by stat area. Distribute fishing effort from fishery area to stat areas.
      • For each stock. Determine legal harvests by age and stat area. Determine incidental mortalities by age and stat area.
    3. For each stock.
      • Determine final abundances by migration area. Aggregate harvest and incidental morts by migration area. Apply natural or other mortalities.
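The per-time-step bookkeeping in section 4 of the outline can be sketched as follows (the weights, area names, and catch equation are hypothetical placeholders, not values from any model): abundance and effort are distributed to statistical areas by the weighting factors, harvests are computed per stat area, and results are re-aggregated for reporting.

```python
import math

def distribute(total, weights):
    """Allocate an aggregate quantity (abundance or fishing effort)
    to statistical areas in proportion to their weighting factors."""
    w_sum = sum(weights.values())
    return {a: total * w / w_sum for a, w in weights.items()}

def harvest(abundance, effort, q):
    """Legal harvest in one stat area (a simple illustrative catch
    equation; the model's HarvestProcess would supply the real form)."""
    return abundance * (1.0 - math.exp(-q * effort))

# Hypothetical stat-area weights for one stock migration area that
# coincides with one fishery harvest area.
weights = {"A1": 1.0, "A2": 2.0, "A3": 1.0}
abund = distribute(12_000.0, weights)       # stock abundance by stat area
eff = distribute(10.0, weights)             # fishing effort by stat area
catch = {a: harvest(abund[a], eff[a], q=0.05) for a in weights}
remaining = {a: abund[a] - catch[a] for a in weights}
migration_area_catch = sum(catch.values())  # re-aggregate for reporting
```

Because everything is computed at the stat-area level, the same machinery works when migration areas and harvest areas are redefined as different aggregations of stat areas, which is the decoupling the outline calls for.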
DISTRIBUTION LIST
Jim Anderson  UW 
Jim Berkson  CRITFC 
Rich Comstock  USFWS 
Carrie Cook-Tabor  USFWS/NMFS 
Judy Cress  UW 
Peter Dygert  NMFS 
Robert Kope  NMFS 
Pete Lawson  ODFW/NMFS 
Richard Methot  NMFS 
Marianne McClure  CRITFC 
Rick Moore  WDFW 
Gary Morishima  QIN 
Ken Newman  U of Idaho 
Jim Norris  UW 
Jim Scott  NWIFC 
Tom Wainwright  NMFS 
Ken Henry  NMFS, REFM
Steve Lindley  NMFS
Mike Prager  NMFS
Rod McInnis  NMFS
Dan Viele  NMFS
Rich Dixon  CDFG
John Geibel  CDFG
Jeff Koenigs  ADFG
Ron Kadowaki Department of Fisheries and Oceans, Nanaimo BC V9R 5K6 
Brian Riddel Department of Fisheries and Oceans, Nanaimo BC V9R 5K6

Those who do not have an email address on this list will each receive a hard copy in the mail.

Web Address: http://www.cbr.washington/harvest/