Autodocs for `Grid Engine SCHED Library'




1 SCHEDD


1.1 order_remove_immediate

NAME
          order_remove_immediate() -- add a remove order for the job task

SYNOPSIS
          int order_remove_immediate(lListElem *job, lListElem *ja_task,
                                     order_t *orders)

FUNCTION
          Generates an order of type ORT_remove_immediate_job for the given job
          task.

INPUTS
          lListElem *job       - The job to remove (JB_Type)
          lListElem *ja_task   - The task to remove (JAT_Type)
          order_t *orders      - The order structure that will be extended by one remove order

RESULT
          int - Error code: 0 = OK, 1 = Errors

NOTES
          MT-NOTE: order_remove_immediate() is MT safe


1.2 order_remove_order_and_immediate

NAME
          order_remove_order_and_immediate() -- add a remove order for the job task

SYNOPSIS
          int order_remove_order_and_immediate(lListElem *job, lListElem *ja_task,
                                               order_t *orders)

FUNCTION
          Generates an order of type ORT_remove_immediate_job for the given job
          task.  Also removes the ORT_start_job order for this task from the order
          list.

INPUTS
          lListElem *job       - The job to remove  (JB_Type)
          lListElem *ja_task   - The task to remove (JAT_Type)
          order_t *orders      - The order structure for this scheduler pass from
                                 which the ORT_start_job order will be removed

RESULT
          int - Error code: 0 = OK, 1 = Errors

NOTES
          MT-NOTE: order_remove_order_and_immediate() is MT safe


1.3 remove_immediate_job

NAME
          remove_immediate_job() -- test for and remove immediate job which can't
                                     be scheduled

SYNOPSIS
          int remove_immediate_job(lList *job_list, lListElem *job, order_t *orders,
                                   int remove_orders)

FUNCTION
          Removes immediate jobs which cannot be scheduled from the given job list.
          This is done by generating an order of type ORT_remove_immediate_job.  If
          remove_orders is set, the ORT_start_job orders are first removed from the
          order list before adding the remove order.

INPUTS
          lList     *job_list     - The list of jobs from which the job should be
                                    removed (JB_Type)
          lListElem *job          - The job to remove (JB_Type)
          order_t *orders         - The order structure for this scheduler pass
          int       remove_orders - Whether the ORT_start_job orders should also
                                    be removed

NOTES
          MT-NOTE: remove_immediate_job() is MT safe


1.4 remove_immediate_jobs

NAME
          remove_immediate_jobs() -- test for and remove immediate jobs which can't
                                     be scheduled

SYNOPSIS
          int remove_immediate_jobs(lList *pending_job_list,
                                    lList *running_job_list, order_t *orders)

FUNCTION
          Goes through all jobs in the pending list to see if any are immediate and
          not idle.  If any are, they are removed.  This is done by generating an
          order of type ORT_remove_immediate_job.  If any array jobs are removed,
          the running list is checked for tasks belonging to the job, which are
          also removed.  This is done by removing the ORT_start_job orders and
          adding an order of type ORT_remove_immediate_job.

INPUTS
          lList *pending_job_list   - The list of pending jobs for this scheduler
                                      pass (JB_Type)
          lList *running_job_list   - The list of running jobs for this scheduler
                                      pass (JB_Type)
          order_t *orders           - The order structure for this scheduler pass

RESULT
          int - Error code: 0 = OK, 1 = Errors (currently always returns 0)

NOTES
          MT-NOTE: remove_immediate_jobs() is MT safe


2 SERF


2.1 -SERF_Implementation

NAME
          SERF_Implementation -- Functions that implement a generic schedule
                                  entry recording facility (SERF)

SEE ALSO


2.2 -SERF_Interface

NAME
          SERF -- Schedule entry recording facility

FUNCTION
          The enlisted functions below allow for plugging in any module that
          records schedule entries be used that registers through sge_serf_init()
          the following methods:
          
             typedef void (*record_schedule_entry_func_t)(
                u_long32 job_id,
                u_long32 ja_taskid,
                const char *state,
                u_long32 start_time,
                u_long32 end_time,
                char level_char,
                const char *object_name,
                const char *name,
                double utilization);
          
             typedef void (*new_schedule_func_t)(u_long32 time);
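
          A minimal sketch (not part of the library) of how a recording module
          might hook in. The callback bodies below are placeholders, and the exact
          argument order of sge_serf_init() is an assumption since only its name
          is referenced here:
          
             /* illustrative plug-in module for the SERF */
             static void my_record_entry(u_long32 job_id, u_long32 ja_taskid,
                                         const char *state, u_long32 start_time,
                                         u_long32 end_time, char level_char,
                                         const char *object_name, const char *name,
                                         double utilization)
             {
                /* e.g. write one trace line per schedule entry */
             }
             
             static void my_new_schedule(u_long32 time)
             {
                /* a new scheduling run begins -- drop entries of the previous run */
             }
             
             void plug_in_recorder(void)
             {
                /* assumption: sge_serf_init() registers the two callbacks in this order */
                sge_serf_init(my_record_entry, my_new_schedule);
             }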

SEE ALSO


3 sched


3.1 select_queue


3.1.1 parallel_global_slots

NAME
          parallel_global_slots() --

RESULT
          dispatch_t -  0 ok got an assignment + set time for DISPATCH_TIME_QUEUE_END
                        1 no assignment at the specified time
                       -1 assignment will never be possible for all jobs of that category


3.1.2 parallel_queue_slots

NAME
          parallel_queue_slots() --

RESULT
          int - 0 ok got an assignment + set time for DISPATCH_TIME_NOW and
                  DISPATCH_TIME_QUEUE_END (only with fixed_slot equals true)
                1 no assignment at the specified time
               -1 assignment will never be possible for all jobs of that category


3.1.3 sequential_global_time

NAME
          sequential_global_time() --

RESULT
          int - 0 ok got an assignment + set time for DISPATCH_TIME_QUEUE_END
                1 no assignment at the specified time
               -1 assignment will never be possible for all jobs of that category


3.1.4 sequential_queue_time

NAME
          sequential_queue_time() --

RESULT
          dispatch_t - 0 ok got an assignment + set time for DISPATCH_TIME_NOW and
                         DISPATCH_TIME_QUEUE_END (only with fixed_slot equals true)
                       1 no assignment at the specified time
                      -1 assignment will never be possible for all jobs of that category


3.2 sge_job_schedd


3.2.1 SPLIT_-Constants

NAME
          SPLIT_-Constants -- Constants used for split_jobs()

SYNOPSIS
          enum {
             SPLIT_FIRST,
             SPLIT_PENDING = SPLIT_FIRST,
             SPLIT_PENDING_EXCLUDED,
             SPLIT_PENDING_EXCLUDED_INSTANCES,
             SPLIT_SUSPENDED,
             SPLIT_WAITING_DUE_TO_PREDECESSOR,
             SPLIT_HOLD,
             SPLIT_ERROR,
             SPLIT_WAITING_DUE_TO_TIME,
             SPLIT_RUNNING,
             SPLIT_FINISHED,
             SPLIT_LAST
          };

FUNCTION
          SPLIT_PENDING     - Pending jobs/tasks which may be dispatched
          SPLIT_PENDING_EXCLUDED     - Pending jobs/tasks which won't
                              be dispatched because this would exceed
                              'max_u_jobs'
          SPLIT_PENDING_EXCLUDED_INSTANCES    - Pending jobs/tasks which
                              won't be dispatched because this would
                              exceed 'max_aj_instances'
          SPLIT_SUSPENDED   - Suspended jobs/tasks
          SPLIT_WAITING_DUE_TO_PREDECESSOR    - Jobs/Tasks waiting for
                              others to finish
          SPLIT_HOLD        - Jobs/Tasks in user/operator/system hold
          SPLIT_ERROR       - Jobs/Tasks which are in error state
          SPLIT_WAITING_DUE_TO_TIME  - These jobs/tasks are not
                              dispatched because start time is in future
          SPLIT_RUNNING     - These Jobs/Tasks won't be dispatched
                              because they are already running
          SPLIT_FINISHED    - Already finished jobs/tasks
          
          SPLIT_NOT_STARTED - jobs that could not be dispatched in one scheduling
                              run
          
          SPLIT_FIRST and SPLIT_LAST might be used to build loops.
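
          As an illustration of that loop pattern, a debugging dump over an array of
          job lists indexed by these constants might look like the sketch below
          (lGetNumberOfElem() is assumed here to be the CULL helper that counts
          list entries):
          
             lList *splitted_job_lists[SPLIT_LAST];   /* one list (or NULL) per state */
             int i;
             
             for (i = SPLIT_FIRST; i < SPLIT_LAST; i++) {
                printf("%-35s %d\n", get_name_of_split_value(i),
                       splitted_job_lists[i] != NULL ?
                          lGetNumberOfElem(splitted_job_lists[i]) : 0);
             }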

SEE ALSO


3.2.2 get_name_of_split_value

NAME
          get_name_of_split_value() -- Constant to name transformation

SYNOPSIS
          const char* get_name_of_split_value(int value)

FUNCTION
          This function transforms a constant value into its internal
          name. (Used for debug output)

INPUTS
          int value - SPLIT_-Constant

RESULT
          const char* - string representation of 'value'

SEE ALSO


3.2.3 job_get_duration

NAME
          job_get_duration() -- Determine a job's runtime duration

SYNOPSIS
          bool job_get_duration(u_long32 *duration, const lListElem *jep)

FUNCTION
          The minimum of the time values the user specified with -l h_rt=<time>
          and -l s_rt=<time> is returned in 'duration'. If neither of these
          time values was specified, the default duration is used.
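
          The selection rule spelled out as plain C, for illustration only (the
          has_*/h_rt/s_rt/default_duration names are hypothetical stand-ins for the
          values read from the job element, MIN() being the usual minimum macro):
          
             if (has_h_rt && has_s_rt)
                *duration = MIN(h_rt, s_rt);
             else if (has_h_rt)
                *duration = h_rt;
             else if (has_s_rt)
                *duration = s_rt;
             else
                *duration = default_duration;   /* neither limit was requested */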

INPUTS
          u_long32 *duration   - Returns duration on success
          const lListElem *jep - The job (JB_Type)

RESULT
          bool - true on success

NOTES
          MT-NOTE: job_get_duration() is MT safe


3.2.4 job_lists_split_with_reference_to_max_running

NAME
          job_lists_split_with_reference_to_max_running()

SYNOPSIS
          void job_lists_split_with_reference_to_max_running(
                   lList **job_lists[],
                   lList **user_list,
                   const char* user_name,
                   int max_jobs_per_user)

FUNCTION
          Move those jobs which would exceed the configured
          'max_u_jobs' limit (schedd configuration) from
          job_lists[SPLIT_PENDING] into job_lists[SPLIT_PENDING_EXCLUDED].
          Only the jobs of the given 'user_name' will be handled. If
          'user_name' is NULL, then all jobs will be handled whose job owner
          is mentioned in 'user_list'.

INPUTS
          lList **job_lists[]   - Array of JB_Type lists
          lList **user_list     - User list of Type JC_Type
          const char* user_name - user name
          int max_jobs_per_user - "max_u_jobs"

NOTE
          The JC_jobs attribute of the user elements contained in "user_list" has
          to be initialized properly before this function is called.

SEE ALSO


3.2.5 job_move_first_pending_to_running

NAME
          job_move_first_pending_to_running() -- Move a job

SYNOPSIS
          void job_move_first_pending_to_running(lListElem **pending_job,
                                                 lList **splitted_jobs[])

FUNCTION
          Move the 'pending_job' from 'splitted_jobs[SPLIT_PENDING]'
          into 'splitted_jobs[SPLIT_RUNNING]'. If 'pending_job' is an
          array job, then the first task (by task id) will be moved into
          'splitted_jobs[SPLIT_RUNNING]'.

INPUTS
          lListElem **pending_job - Pointer to a pending job (JB_Type)
          lList **splitted_jobs[] - (JB_Type) array of job lists

RETURNS
          bool - true, if the pending job was removed

SEE ALSO


3.2.6 split_jobs

NAME
          split_jobs() -- Split list of jobs according to their state

SYNOPSIS
          void split_jobs(lList **job_list, lList **answer_list,
                          u_long32 max_aj_instances,
                          lList **result_list[])

FUNCTION
          Split a list of jobs according to their state.
          'job_list' is the input list of jobs. The jobs in this list
          have different job states. For the dispatch algorithm only
          those jobs are of interest which are really pending. Jobs
          which are pending and in error state or jobs which have a
          hold applied (start time in future, administrator hold, ...)
          are not necessary for the dispatch algorithm.
          After a call to this function the jobs of 'job_list' may
          have been moved into one of the 'result_list's.
          Each of those lists contains jobs which have a certain state
          (e.g. result_list[SPLIT_WAITING_DUE_TO_TIME] will contain
          all jobs which have to wait according to their start time).
          'max_aj_instances' is the maximum number of tasks of an
          array job which may be instantiated at the same time.
          'max_aj_instances' is used for the split decisions.
          In case of any error the 'answer_list' will be used to report
          errors (it is currently not used).
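
          A hedged sketch of a possible call site, inferred from the synopsis and the
          SPLIT_-Constants above (job_list and max_aj_instances are assumed to exist
          in the caller):
          
             lList *lists[SPLIT_LAST];
             lList **result_list[SPLIT_LAST];
             lList *answer_list = NULL;
             int i;
             
             for (i = SPLIT_FIRST; i < SPLIT_LAST; i++) {
                lists[i] = NULL;
                result_list[i] = &lists[i];
             }
             
             split_jobs(&job_list, &answer_list, max_aj_instances, result_list);
             
             /* dispatchable jobs are now in *result_list[SPLIT_PENDING] */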

INPUTS
          lList **job_list          - JB_Type input list
          u_long32 max_aj_instances - max. num. of task instances
          lList **result_list[]     - Array of result list (JB_Type)

NOTES
          In former versions of SGE/EE we had 8 split functions.
          Each of those functions walked twice over the job list.
          This was time consuming in the case of many thousands of jobs.
          
          We tried to improve this:
             - loop over all jobs only once
             - minimize copy operations where possible
          
          Unfortunately this function is now hard to understand. Sorry!

SEE ALSO


3.2.7 trash_splitted_jobs

NAME
          trash_splitted_jobs() -- Trash all not needed job lists

SYNOPSIS
          void trash_splitted_jobs(lList **splitted_job_lists[])

FUNCTION
          Trash all job lists which are not needed for scheduling decisions.
          Before jobs and lists are trashed, scheduling messages will
          be generated.
          
          Following lists will be trashed:
             splitted_job_lists[SPLIT_ERROR]
             splitted_job_lists[SPLIT_HOLD]
             splitted_job_lists[SPLIT_WAITING_DUE_TO_TIME]
             splitted_job_lists[SPLIT_WAITING_DUE_TO_PREDECESSOR]
             splitted_job_lists[SPLIT_PENDING_EXCLUDED_INSTANCES]
             splitted_job_lists[SPLIT_PENDING_EXCLUDED]

INPUTS
          lList **splitted_job_lists[] - list of job lists

SEE ALSO


3.2.8 user_list_init_jc

NAME
          user_list_init_jc() -- inc. the # of jobs a user has running

SYNOPSIS
          void user_list_init_jc(lList **user_list,
                                 const lList *running_list)

FUNCTION
          Initialize "user_list" and JC_jobs attribute for each user according
          to the list of running jobs.

INPUTS
          lList **user_list          - JC_Type list
          const lList *running_list - JB_Type list

RESULT
          void - None


4 schedd


4.1 schedd_mes


4.1.1 schedd_mes_add

NAME
          schedd_mes_add() -- Add one entry into the message structure.

SYNOPSIS
          void schedd_mes_add(u_long32 job_number,
                              u_long32 message_number,
                              ...)

FUNCTION
          While the scheduler tries to dispatch jobs it might
          call this function to add messages into a temporary structure.
          This function might be called several times. Each call
          will add one element which contains one message describing
          a reason why a job can't be dispatched, and the job id concerned.
          
          Once it is clear whether the job could be dispatched or not, one of
          the following functions has to be called:
          
             schedd_mes_commit()
             schedd_mes_rollback()
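
          A minimal sketch of that cycle (the message id is a placeholder for one of
          the constants from sge_schedd_text.h; the commit arguments follow the
          synopsis in 4.1.3):
          
             /* while trying to dispatch 'job_id' ... */
             schedd_mes_add(job_id, SCHEDD_INFO_EXAMPLE_, "some-resource");
             
             if (dispatched) {
                schedd_mes_rollback();                 /* job runs, drop the messages */
             } else {
                schedd_mes_commit(job_list, 0, NULL);  /* keep the reasons why not */
             }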

INPUTS
          u_long32 job_number     - job id
          u_long32 message_number - message number (sge_schedd_text.h)
          ...                     - arguments for format string
                                    sge_schedd_text(message_number)

NOTES
          MT-NOTE: schedd_mes_add() is MT safe

SEE ALSO


4.1.2 schedd_mes_add_global

NAME
          schedd_mes_add_global() -- add a global message

SYNOPSIS
          void schedd_mes_add_global(u_long32 message_number, ...)

FUNCTION
          Add a global message into a message structure.

INPUTS
          u_long32 message_number - message number (sge_schedd_text.h)
          ...                     - arguments for format string
                                    sge_schedd_text(message_number)

NOTES
          MT-NOTE: schedd_mes_add_global() is MT safe


4.1.3 schedd_mes_commit

NAME
          schedd_mes_commit() -- Complete message elements and move them

SYNOPSIS
          void schedd_mes_commit(lList *job_list, int ignore_category,
                                 lRef jid_category)

FUNCTION
          Each message contained in "tmp_sme" contains only
          one job id. We have to find other jobs in "job_list" and
          add their job ids to the list of ids contained in "tmp_sme"
          message elements. After that we have to move all messages
          contained in "tmp_sme" into "sme".
          
          If "ignore_category" is 1, the job category will be ignored.
          This means that all ids of "job_list" will be added to all
          messages contained in "tmp_sme".
          
          If no category is passed in and ignore_category is false, the messages
          are only generated for the current job, i.e. they are just copied.

INPUTS
          lList *job_list     - JB_Type list
          int ignore_category - if set to true, the messages will be generated for all
                                jobs in the list
          lRef jid_category   - if not NULL, the function uses the category to ensure
                                that every message is added only once per category


4.1.4 schedd_mes_initialize

NAME
          schedd_mes_initialize() -- Initialize module variables

SYNOPSIS
          void schedd_mes_initialize(void)

FUNCTION
          Initialize module variables


4.1.5 schedd_mes_obtain_package

NAME
          schedd_mes_obtain_package() -- Get message structure

SYNOPSIS
          lListElem *schedd_mes_obtain_package(int *global_mes_count,
                                               int *job_mes_count)

FUNCTION
          Returns the message structure which contains all messages.

INPUTS
          int *global_mes_count - out: returns nr of global messages
          int *job_mes_count    - out: returns nr of job messages

NOTES
          The calling function is responsible for freeing the returned
          message structure when it is no longer needed.

RESULT
          lListElem* - SME_Type element


4.1.6 schedd_mes_rollback

NAME
          schedd_mes_rollback() -- Free temporarily generated messages

SYNOPSIS
          void schedd_mes_rollback(void)

FUNCTION
          Free temporarily generated messages contained in "tmp_sme".


5 schedd_message


5.1 schedd_mes_add_join

NAME
          schedd_mes_add_join() -- same as schedd_mes_add, but joins messages based
                                   on the message id.

SYNOPSIS
          void schedd_mes_add_join(u_long32 job_number, u_long32 message_number,
          ...)

FUNCTION
          Same as schedd_mes_add(), but joins messages based
          on the message id. It only uses the temporary message
          list, not the global one.

INPUTS
          u_long32 job_number     - job id
          u_long32 message_number - message number (sge_schedd_text.h)
          ...                     - arguments for format string
                                    sge_schedd_text(message_number)

NOTES
          MT-NOTE: schedd_mes_add_join() is MT safe


5.2 schedd_mes_get_tmp_list

NAME
          schedd_mes_get_tmp_list() -- gets all messages for the current job

SYNOPSIS
          lList* schedd_mes_get_tmp_list()

FUNCTION
          returns a list of all messages for the current job

RESULT
          lList* -  message list


5.3 schedd_mes_set_tmp_list

NAME
          schedd_mes_set_tmp_list() -- sets the messages for the current job

SYNOPSIS
          void schedd_mes_set_tmp_list(lListElem *category, int name, u_long32 job_number)

FUNCTION
          Takes a message list, changes the job number to the current job and stores
          the list.

INPUTS
          lListElem *category - an object, which stores the list
          int name            - element id for the list
          u_long32 job_number - job number


6 schedlib


6.1 ssi


6.1.1 -Simple-Scheduler-Interface

NAME
          Simple-Scheduler-Interface -- Interface for custom schedulers

FUNCTION
          SGE provides a very simple interface to custom schedulers.
          Such a scheduler can be created using the event client or the
          event mirror interface.
          The interface provides functions to start a job and to
          delete a job.
          
          It was created to allow an easier integration of the MAUI scheduler
          into Grid Engine.

SEE ALSO


6.1.2 -Simple-Scheduler-Interface-Typedefs

NAME
          -Simple-Scheduler-Interface-Typedefs -- typedefs for the SSI

SYNOPSIS
          typedef struct {
             int procs;
             const char *host_name;
          } task_map;

FUNCTION
          A task_map describes the structure of a job.
          A job can be spawned over an arbitrary number of hosts.
          A job has an arbitrary number of tasks per host.
          An array of task_map is used to pass information to SSI functions.
          It can contain any number of entries; the last entry has to contain
          0 as procs.

SEE ALSO


6.1.3 sge_ssi_job_cancel

NAME
          sge_ssi_job_cancel() -- delete or restart a job

SYNOPSIS
          bool sge_ssi_job_cancel(const char *job_identifier, bool reschedule)

FUNCTION
          Delete the given job.
          If reschedule is set to true, reschedule the job.

INPUTS
          const char *job_identifier - job identifier in the form
                                       <jobid>.<ja_task_id>, e.g. 123.1
          bool reschedule            - if true, reschedule job

RESULT
          bool - true, if the job could be successfully deleted (rescheduled),
                else false.

NOTES
          The reschedule parameter is ignored in the current implementation.

SEE ALSO


6.1.4 sge_ssi_job_start

NAME
          sge_ssi_job_start() -- start a job

SYNOPSIS
          bool sge_ssi_job_start(const char *job_identifier, const char *pe,
                                task_map tasks[])

FUNCTION
          Start the job described by job_identifier, pe and tasks.
          job_identifier has to be given in the form "<job_id>.<ja_task_id>",
          e.g. "123.1" and must reference a pending job/array task.
          For parallel jobs, pe has to be the name of an existing parallel
          environment.
          tasks describes how many tasks are to be started per host.
          
          The function creates a scheduling order and sends it to qmaster.
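
          A hedged usage sketch based on the signatures above (the host names, the
          PE name "mpi" and the task counts are invented for illustration):
          
             task_map tasks[] = {
                { 4, "host1" },      /* 4 tasks on host1 */
                { 2, "host2" },      /* 2 tasks on host2 */
                { 0, NULL }          /* terminating entry: procs == 0 */
             };
             
             if (!sge_ssi_job_start("123.1", "mpi", tasks)) {
                /* the order could not be created or sent to qmaster */
             }
             
             /* ... later: delete (do not reschedule) the job */
             sge_ssi_job_cancel("123.1", false);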

INPUTS
          const char *job_identifier - unique job identifier
          const char *pe             - name of a parallel environment
                                       or NULL for sequential jobs
          task_map tasks[]           - mapping host->number of tasks

RESULT
          bool - true on success, else false

SEE ALSO


7 scheduler


7.1 parallel_maximize_slots_pe

NAME
          parallel_maximize_slots_pe() -- Maximize number of slots for an assignment

SYNOPSIS
          static int parallel_maximize_slots_pe(sge_assignment_t *best, lList *host_list,
          lList *queue_list, lList *centry_list, lList *acl_list)

FUNCTION
             The largest possible slot amount is searched for a job assuming a
             particular parallel environment is used at a particular start time.
             If the slot number passed is 0 we start with the minimum
             possible slot number for that job.
          
             To search most efficiently for the right slot value, it has three search
             strategies implemented:
             - binary search
             - least slot value first
             - highest slot value first
          
             To be able to use binary search all possible slot values are stored in
             one array. The slot values in this array are sorted in ascending order.
             After the right slot value is found, it is very easy to compute the best
             strategy from the result. For each strategy it will compute how many
             iterations would have been needed to compute the correct result. These
             steps will be stored for the next run and used to figure out the best
             algorithm. To ensure that we can adapt to rapid changes and also ignore
             spikes we are using the running average algorithm in an 80-20 setting.
             This means that the algorithm will need 4 (at most 5) iterations to
             adapt to a new scenario.
          
          Further enhancements:
             It might be a good idea to store the derived values with the job categories
             and allow finding the best strategy per category.
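
          As an illustration only (not the actual implementation), the 80-20 running
          average amounts to blending 80% history with 20% of the latest measurement
          per strategy (cost[] and steps[] are hypothetical names):
          
             /* steps[s]: iterations strategy s would have needed in this run */
             for (s = 0; s < N_STRATEGIES; s++) {
                cost[s] = 0.8 * cost[s] + 0.2 * (double)steps[s];
             }
             /* the strategy with the smallest cost is used for the next run */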

INPUTS
          sge_assignment_t *best - herein we keep all important in/out information
          lList *host_list       - a list of all available hosts
          lList *queue_list      - a list of all available queues
          lList *centry_list     - a list of all available complex attributes
          lList *acl_list        - a list of all access lists

RESULT
          int - 0 ok got an assignment (maybe without maximizing it)
                1 no assignment at the specified time
               -1 assignment will never be possible for all jobs of that category
               -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: parallel_maximize_slots_pe() is MT safe as long as the provided
                   lists are owned by the caller

SEE ALSO
          sconf_best_pe_alg
          sconf_update_pe_alg
          add_pe_slots_to_category


7.2 parallel_reservation_max_time_slots

NAME
          parallel_reservation_max_time_slots() -- Search earliest possible assignment

SYNOPSIS
          static dispatch_t parallel_reservation_max_time_slots(sge_assignment_t *best)

FUNCTION
          The earliest possible assignment is searched for a job assuming a
          particular parallel environment is used with a particular slot
          number. If the slot number passed is 0 we start with the minimum
          possible slot number for that job. The search starts with the
          latest queue end time if DISPATCH_TIME_QUEUE_END was specified
          rather than a real time value.

INPUTS
          sge_assignment_t *best - herein we keep all important in/out information

RESULT
          dispatch_t - 0 ok got an assignment
                       1 no assignment at the specified time (???)
                      -1 assignment will never be possible for all jobs of that category
                      -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: parallel_reservation_max_time_slots() is not MT safe


7.3 sge_select_parallel_environment

NAME
          sge_select_parallel_environment() -- Decide about a PE assignment

SYNOPSIS
          static dispatch_t sge_select_parallel_environment(sge_assignment_t *best, lList
          *pe_list)

FUNCTION
          When users use a wildcard PE request such as -pe 'mpi8_*' <pe_range>,
          more than a single parallel environment can match the wildcard expression.
          In case of 'now' assignments the PE that gives us the largest assignment
          is selected. When scheduling a reservation we search for the earliest
          assignment for each PE and then choose the one that finally gets us the
          maximum number of slots.

IMPORTANT
          The scheduler info messages are not cached. They are added globally and have
          to be added for each job in the category. When the messages are updated
          this has to be changed.

INPUTS
          sge_assignment_t *best - herein we keep all important in/out information
          lList *pe_list         - the list of all parallel environments (PE_Type)

RESULT
          dispatch_t - 0 ok got an assignment
                       1 no assignment at the specified time (???)
                      -1 assignment will never be possible for all jobs of that category
                      -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: sge_select_parallel_environment() is not MT safe


8 sge_complex_schedd


8.1 build_name_filter

NAME
          build_name_filter() -- fills in an array with complex names, which can be used
                                 as a filter.

SYNOPSIS
          void build_name_filter(const char **filter, lList *list, int t_name, int
          *pos)

FUNCTION
          Takes an array of a given size and fills in complex names.

INPUTS
          const char **filter     - target for the filter strings. It has to be of sufficient size.
          lList *list             - a list of complexes, from which the names are extracted
          int t_name              - specifies the field which is used as a name

NOTES
          ???


8.2 get_attribute_list

NAME
          get_attribute_list() -- generates a list for all defined elements in a queue, host, global

SYNOPSIS
          static lList* get_attribute_list(lListElem *global, lListElem *host,
          lListElem *queue, lList *centry_list)

FUNCTION
          Generates a list for all attributes defined at the given queue, host, global.

INPUTS
          lListElem *global  - global host
          lListElem *host    - host (or NULL, if only global attributes are important)
          lListElem *queue   - queue (or NULL if only host/global attributes are important)
          lList *centry_list - system wide attribute config list

RESULT
          static lList* - list of attributes or NULL, if no attributes exist.


8.3 get_attribute_list_by_names

NAME
          get_attribute_list_by_names() -- generates a list of attributes from the given names

SYNOPSIS
          static lList* get_attribute_list_by_names(lListElem *global, lListElem
          *host, lListElem *queue, lList *centry_list, lList *attrnames)

FUNCTION
          Assembles a list of attributes for a given queue, host, or the global host,
          which contains all the specified elements. The general sort order is global,
          host, queue. If an element cannot be found, it is omitted. If no elements
          exist, the function returns NULL.

INPUTS
          lListElem *global      - global host
          lListElem *host        - host (or NULL, if only global resources are asked for )
          lListElem *queue       - queue (or NULL, if only global / host resources are asked for)
          lList *centry_list     - the system wide attribute config list
          lList *attrnames       - ST_Type list of attribute names

RESULT
          static lList* - a CULL list of elements or NULL


8.4 is_attr_prior2

NAME
          is_attr_prior2() -- checks if the value set in the structure has a higher
          priority than the new one

SYNOPSIS
          static bool is_attr_prior2(lListElem *upper_el, double lower_value, int
          t_value, int t_dominant)

FUNCTION
          Decides whether the value stored in the given structure takes priority over
          a new value. The decision is based on some basic rules: whether a value is
          set at all (dominant == DOMINANT_TYPE_VALUE) and which relational operator
          is used. If that is not enough, the two values are compared and, based on
          the operator, true or false is returned:
          if no value is set in the structure: false
          if the relops are == or != : true
          if the relops are >= or > : true, when the new value is smaller than the old one
          if the relops are <= or < : true, when the new value is bigger than the old one

INPUTS
          lListElem *upper_el - target structure
          double lower_value  - new value
          int t_value         - which field to use (CE_doubleval or CE_pj_doubleval)
          int t_dominant      - which dominant field to use (CE_dominant, CE_pj_dominant)

RESULT
          static bool - true, if the value in the structure has the higher priority


8.5 request_cq_rejected

NAME
          request_cq_rejected() -- Check, if -l request forecloses cluster queue

SYNOPSIS
          bool request_cq_rejected(const lList* hard_resource_list, const lListElem
          *cq, const lList *centry_list, dstring *unsatisfied)

FUNCTION
          Do -l matching with the aim to foreclose the entire cluster queue.
          Each cluster queue configuration profile must specify a fixed value
          otherwise we can't rule out a cluster queue. Both complex_values and
          queue resource limits are checked.

INPUTS
          const lList* hard_resource_list - resource list -l (CE_Type)
          const lListElem *cq             - cluster queue (CQ_Type)
          const lList *centry_list        - complex entry list (CE_Type)
          dstring *unsatisfied            - diagnosis information, if rejected

RESULT
          bool - true, if the cluster queue is ruled out

NOTES
          MT-NOTE: request_cq_rejected() is MT safe


9 sge_dlib

NAME
          sge_dlib() -- lookup, load, and cache function from a dynamic library

SYNOPSIS
          void *sge_dlib(const char *key, const char *lib_name, const char *fn_name,
                         lib_cache_t **lib_cache_list)

INPUTS
          const char *key - unique key for identifying function
          const char *lib_name - dynamic library name
          const char *fn_name - function name
          lib_cache_t **lib_cache_list - cache list (if NULL, we use a global cache)

RETURNS
          void * - the address of the function

NOTES
          MT-NOTE: sge_free_load_list() is not MT safe
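
          A hedged usage sketch; the library and function names are invented for
          illustration, and the cast assumes the caller knows the loaded function's
          prototype:
          
             typedef int (*my_fn_t)(const char *arg);
             my_fn_t fn;
             
             /* passing NULL as cache list means the global cache is used */
             fn = (my_fn_t)sge_dlib("mylib:my_func", "libmylib.so", "my_func", NULL);
             if (fn != NULL) {
                (void)fn("hello");
             }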


10 sge_job_schedd


10.1 sge_job_slot_request

NAME
          sge_job_slot_request() -- return a job's static urgency slot request

SYNOPSIS
          int sge_job_slot_request(lListElem *job, lList *pe_list)

FUNCTION
          For sequential jobs the static urgency job slot request is always 1.
          For parallel jobs the static urgency job slot request depends on
          static urgency slots as defined with sge_pe(5).

INPUTS
          lListElem *job - the job (JB_Type)
          lList *pe_list - the PE list (PE_Type)

RESULT
          int - Number of slots

NOTES
          In case of a wildcard parallel environment request the setting of the
          first matching is used. Behaviour is undefined if multiple parallel
          environments specify different settings!


10.2 task_get_duration

NAME
          task_get_duration() -- Determine a task's effective runtime limit

SYNOPSIS
          bool task_get_duration(u_long32 *duration, const lListElem *ja_task)

FUNCTION
          Determines the effective runtime limit from the requested h_rt/s_rt or
          from the resulting queue's h_rt/s_rt.

INPUTS
          u_long32 *duration       - tasks duration in seconds
          const lListElem *ja_task - task element

RESULT
          bool - true

NOTES
          MT-NOTE: task_get_duration() is MT safe


11 sge_orders


11.1 sge_GetNumberOfOrders

NAME
          sge_GetNumberOfOrders() -- returns the number of orders generated

SYNOPSIS
          int sge_GetNumberOfOrders(order_t *orders)

FUNCTION
          returns the number of orders generated

INPUTS
          order_t *orders - a structure of orders

RESULT
          int - number of orders in the structure

NOTES
          MT-NOTE: sge_GetNumberOfOrders() is  MT safe


11.2 sge_add_schedd_info

NAME
          sge_add_schedd_info() -- retrieves the messages and generates an order out
                                   of them

SYNOPSIS
          lList* sge_add_schedd_info(lList *or_list, int *global_mes_count, int
          *job_mes_count)

FUNCTION
          Retrieves all messages, puts them into an order package, and frees the
          original messages. It also returns the number of global and job messages.

INPUTS
          lList *or_list        - in: the order list to which the message order is added
          int *global_mes_count - out: global message count
          int *job_mes_count    - out: job message count

RESULT
          lList* - the order list

NOTES
          MT-NOTE: sge_add_schedd_info() is not MT safe


11.3 sge_create_orders

NAME
          sge_create_orders() -- Create a new order-list or add orders to an existing one

SYNOPSIS
          lList* sge_create_orders(lList *or_list, u_long32 type, lListElem *job,
          lListElem *ja_task, lList *granted, bool update_execd)

FUNCTION
          - If the or_list is NULL, a new one will be generated
          
          - in case of a clear_pri order, the ja_task is important. If NULL is passed
            in for ja_task, only the pending tasks of the specified job are set to NULL.
            If a ja_task is passed in, all tasks of the job are set to NULL

INPUTS
          lList *or_list     - the order list
          u_long32 type      - order type
          lListElem *job     - job
          lListElem *ja_task - ja_task ref or NULL (there is only one case where it can be NULL)
          lList *granted     - granted queue list
          bool update_execd  - should the execd get new ticket values?

RESULT
          lList* - returns the order list

NOTES
          MT-NOTE: sge_create_orders() is MT safe

SEE ALSO


11.4 sge_join_orders

NAME
          sge_join_orders() -- generates one order list from the order structure

SYNOPSIS
          lList* sge_join_orders(order_t orders)

FUNCTION
          Generates one order list from the order structure and cleans the
          order structure. Orders which have already been sent are
          removed.

INPUTS
          order_t orders - the order structure

RESULT
          lList* - an order list

NOTES
          MT-NOTE: sge_join_orders() is not MT safe


12 sge_pe_schedd


12.1 pe_match_static

NAME
          pe_match_static() -- Why not job to PE?

SYNOPSIS
          int pe_match_static(lListElem *job, lListElem *pe, lList *acl_list, bool
          only_static_checks)

FUNCTION
          Checks if PE is suited for the job.

INPUTS
          lListElem *job          - ???
          lListElem *pe           - ???
          lList *acl_list         - ???
          bool only_static_checks - ???

RESULT
          dispatch_t - DISPATCH_OK        ok
                       DISPATCH_NEVER_CAT assignment will never be possible for all
                                          jobs of that category

NOTES
          MT-NOTE: pe_match_static() is not MT safe


13 sge_qeti


13.1 sge_qeti_list_add

NAME
          sge_qeti_list_add() -- Adds a resource utilization to QETI resource list

SYNOPSIS
          static int sge_qeti_list_add(lList **lpp, const char *name, lList*
          rue_lp, double total, bool must_exist)

FUNCTION
          ???

INPUTS
          lList **lpp      - QETI resource list
          const char *name - Name of the resource
          lList* rue_lp    - Resource utilization entry (RUE_Type)
          double total     - Total resource amount
          bool must_exist  - If true the entry must exist in 'lpp'.

RESULT
          static int -  0 on success

NOTES
          MT-NOTE: sge_qeti_list_add() is not MT safe


13.2 sge_qeti_next_before

NAME
          sge_qeti_next_before() -- ???

SYNOPSIS
          void sge_qeti_next_before(sge_qeti_t *qeti, u_long32 start)

FUNCTION
          All queue end next references are set so that
          sge_qeti_next() will return a time value that is before (i.e. less than)
          'start'.

INPUTS
          sge_qeti_t *qeti - ???
          u_long32 start   - ???

NOTES
          MT-NOTE: sge_qeti_next_before() is MT safe


14 sge_resource_quota_schedd


14.1 check_and_debit_rqs_slots

NAME
          check_and_debit_rqs_slots() -- Determine RQS limit slot amount and debit

SYNOPSIS
          static void check_and_debit_rqs_slots(sge_assignment_t *a, const char
          *host, const char *queue, int *slots, int *slots_qend, dstring
          *rule_name, dstring *rue_name, dstring *limit_name)

FUNCTION
          The function determines the final slot and slots_qend amount due
          to all resource quota limitations that apply for the queue instance.
          Both slot amounts get debited from the a->limit_list to keep track
          of still available amounts per resource quota limit.

INPUTS
          sge_assignment_t *a - Assignment data structure
          const char *host    - hostname
          const char *queue   - queuename
          int *slots          - needed/available slots
          int *slots_qend     - needed/available slots_qend
          dstring *rule_name  - caller maintained buffer
          dstring *rue_name   - caller maintained buffer
          dstring *limit_name - caller maintained buffer

NOTES
          MT-NOTE: check_and_debit_rqs_slots() is MT safe


14.2 cqueue_shadowed

NAME
          cqueue_shadowed() -- Check for cluster queue rule before current rule

SYNOPSIS
          static bool cqueue_shadowed(const lListElem *rule, sge_assignment_t *a)

FUNCTION
          Check whether there is any cluster queue specific rule before the
          current rule.

INPUTS
          const lListElem *rule - Current rule
          sge_assignment_t *a   - Scheduler assignment

RESULT
          static bool - True if shadowed

EXAMPLE
          limit queue Q001 to F001=1
          limit host gridware to F001=0  (--> returns 'true' due to 'Q001' meaning
                                    that gridware can't be generally ruled out )

NOTES
          MT-NOTE: cqueue_shadowed() is MT safe


14.3 cqueue_shadowed_by

NAME
          cqueue_shadowed_by() -- Check rules shadowing current cluster queue rule

SYNOPSIS
          static bool cqueue_shadowed_by(const char *cqname, const lListElem *rule,
          sge_assignment_t *a)

FUNCTION
          Check if cluster queue in current rule is shadowed.

INPUTS
          const char *cqname    - Cluster queue name to check
          const lListElem *rule - Current rule
          sge_assignment_t *a   - Assignment

RESULT
          static bool - True if shadowed

EXAMPLE
          limits queues Q001,Q002 to F001=1
          limits queues Q002,Q003 to F001=1 (--> returns 'true' for Q002 and 'false' for Q003)

NOTES
          MT-NOTE: cqueue_shadowed_by() is MT safe


14.4 debit_job_from_rqs

NAME
          debit_job_from_rqs() -- debits job in all relevant resource quotas

SYNOPSIS
          int debit_job_from_rqs(lListElem *job, lList *granted, lListElem* pe,
          lList *centry_list)

FUNCTION
          The function debits the requested amount of resources in all relevant rules.

INPUTS
          lListElem *job     - job request (JB_Type)
          lList *granted     - granted list (JG_Type)
          lListElem* pe      - granted pe (PE_Type)
          lList *centry_list - consumable resources list (CE_Type)

RESULT
          int - always 0

NOTES
          MT-NOTE: debit_job_from_rqs() is not MT safe


14.5 host_shadowed

NAME
          host_shadowed() -- Check for host rule before current rule

SYNOPSIS
          static bool host_shadowed(const lListElem *rule, sge_assignment_t *a)

FUNCTION
          Check whether there is any host specific rule before the
          current rule.

INPUTS
          const lListElem *rule - Current rule
          sge_assignment_t *a   - Scheduler assignment

RESULT
          static bool - True if shadowed

EXAMPLE
          limit host gridware to F001=1
          limit queue Q001 to F001=0  (--> returns 'true' due to 'gridware' meaning
                                    that Q001 can't be generally ruled out )

NOTES
          MT-NOTE: host_shadowed() is MT safe


14.6 host_shadowed_by

NAME
          host_shadowed_by() -- ???

SYNOPSIS
          static bool host_shadowed_by(const char *host, const lListElem *rule,
          sge_assignment_t *a)

FUNCTION
          Check if host in current rule is shadowed.

INPUTS
          const char *host      - Host name to check
          const lListElem *rule - Current rule
          sge_assignment_t *a   - Assignment

RESULT
          static bool - True if shadowed

EXAMPLE
          limits hosts host1,host2 to F001=1
          limits hosts host2,host3 to F001=1 (--> returns 'true' for host2 and 'false' for host3)

NOTES
          MT-NOTE: host_shadowed_by() is MT safe


14.7 is_cqueue_expand

NAME
          is_cqueue_expand() -- Returns true if rule expands on cluster queues

SYNOPSIS
          bool is_cqueue_expand(const lListElem *rule)

FUNCTION
          Returns true if rule expands on cluster queues.

INPUTS
          const lListElem *rule - RQR_Type

RESULT
          bool - True if rule expands on cluster queues

EXAMPLE
          "queues {*}" returns true
          "queues Q001,Q002" returns false

NOTES
          MT-NOTE: is_cqueue_expand() is MT safe


14.8 is_cqueue_global

NAME
          is_cqueue_global() -- Global rule with regards to cluster queues?

SYNOPSIS
          bool is_cqueue_global(const lListElem *rule)

INPUTS
          const lListElem *rule - RQR_Type

RESULT
          bool - True if cluster queues play no role with the rule

NOTES
          MT-NOTE: is_cqueue_global() is MT safe


14.9 is_host_expand

NAME
          is_host_expand() -- Returns true if rule expands on hosts

SYNOPSIS
          bool is_host_expand(const lListElem *rule)

FUNCTION
          Returns true if rule expands on hosts.

INPUTS
          const lListElem *rule - RQR_Type

RESULT
          bool - True if rule expands on hosts

EXAMPLE
          "hosts {*}" returns true
          "hosts @allhosts" returns false

NOTES
          MT-NOTE: is_host_expand() is MT safe


14.10 is_host_global

NAME
          is_host_global() -- Global rule with regards to hosts?

SYNOPSIS
          bool is_host_global(const lListElem *rule)

FUNCTION
          Return true if hosts play no role with the rule

INPUTS
          const lListElem *rule - RQR_Type

RESULT
          bool - True if hosts play no role with the rule

NOTES
          MT-NOTE: is_host_global() is MT safe


14.11 parallel_limit_slots_by_time

NAME
          parallel_limit_slots_by_time() -- Determine number of slots avail. within
                                            time frame

SYNOPSIS
          static dispatch_t parallel_limit_slots_by_time(const sge_assignment_t *a,
          lList *requests, int *slots, int *slots_qend, lListElem *centry, lListElem
          *limit, dstring rue_name)

FUNCTION
          ???

INPUTS
          const sge_assignment_t *a - job info structure (in)
          lList *requests           - Job request list (CE_Type)
          int *slots                - out: free slots
          int *slots_qend           - out: free slots in the far far future
          lListElem *centry         - Load information for the resource
          lListElem *limit          - limitation (RQRL_Type)
          dstring rue_name          - rue_name saved in limit sublist RQRL_usage
          lListElem *qep            - queue instance (QU_Type)

RESULT
          static dispatch_t - DISPATCH_OK        got an assignment
                            - DISPATCH_NEVER_CAT no assignment for all jobs of that category

NOTES
          MT-NOTE: parallel_limit_slots_by_time() is not MT safe

SEE ALSO


14.12 parallel_rqs_slots_by_time

NAME
          parallel_rqs_slots_by_time() -- Determine number of slots avail within
                                           time frame

SYNOPSIS
          dispatch_t parallel_rqs_slots_by_time(const sge_assignment_t *a,
          int *slots, int *slots_qend, const char *host, const char *queue)

FUNCTION
          This function iterates for a queue instance over all resource quota sets
          and evaluates the number of slots available.

INPUTS
          const sge_assignment_t *a - job info structure (in)
          int *slots                - out: # free slots
          int *slots_qend           - out: # free slots in the far far future
          lListElem *qep            - QU_Type Elem

RESULT
          static dispatch_t - DISPATCH_OK        got an assignment
                            - DISPATCH_NEVER_CAT no assignment for all jobs of that category

NOTES
          MT-NOTE: parallel_rqs_slots_by_time() is not MT safe

SEE ALSO


14.13 rqs_by_slots

NAME
          rqs_by_slots() -- Check queue instance suitability due to RQS

SYNOPSIS
          dispatch_t rqs_by_slots(sge_assignment_t *a, const char *queue,
          const char *host, u_long32 *tt_rqs_all, bool *is_global,
          dstring *rue_string, dstring *limit_name, dstring *rule_name)

FUNCTION
          Checks (or determines earliest time) queue instance suitability
          according to resource quota set limits.
          
          For performance reasons RQS verification results are cached in
          a->limit_list. In addition unsuited queues and hosts are collected
          in a->skip_cqueue_list and a->skip_host_list so that ruling out
          chunks of queue instances becomes quite cheap.

INPUTS
          sge_assignment_t *a  - assignment
          const char *queue    - cluster queue name
          const char *host     - host name
          u_long32 *tt_rqs_all - returns earliest time over all resource quotas
          bool *is_global      - returns true if result is valid for any other queue
          dstring *rue_string  - caller maintained buffer
          dstring *limit_name  - caller maintained buffer
          dstring *rule_name   - caller maintained buffer
          u_long32 tt_best     - time of best solution found so far

RESULT
          static dispatch_t - usual return values

NOTES
          MT-NOTE: rqs_by_slots() is MT safe


14.14 rqs_can_optimize

NAME
          rqs_can_optimize() -- Poke whether a queue/host negation can be made

SYNOPSIS
          static void rqs_can_optimize(const lListElem *rule, bool *host, bool
          *queue, sge_assignment_t *a)

FUNCTION
          A global limit was hit with 'rule'. This function helps to determine
          to what extent we can profit from that situation. If there is no
          previous matching rule within the same rule set any other queue/host
          can be skipped.

INPUTS
          const lListElem *rule - Rule
          bool *host            - Any previous rule with a host scope?
          bool *queue           - Any previous rule with a queue scope?
          sge_assignment_t *a   - Scheduler assignment

NOTES
          MT-NOTE: rqs_can_optimize() is MT safe


14.15 rqs_exceeded_sort_out

NAME
          rqs_exceeded_sort_out() -- Rule out queues/hosts whenever possible

SYNOPSIS
          bool rqs_exceeded_sort_out(sge_assignment_t *a, const lListElem *rule,
          const dstring *rule_name, const char* queue_name, const char* host_name)

FUNCTION
          This function tries to rule out hosts and cluster queues after a
          quota was found to be exceeded for a limitation rule with a specific
          queue instance.
          
          When a limitation was exceeded that applies to the entire
          cluster 'true' is returned, 'false' otherwise.

INPUTS
          sge_assignment_t *a      - Scheduler assignment type
          const lListElem *rule    - The exceeded rule
          const dstring *rule_name - Name of the rule (monitoring only)
          const char* queue_name   - Cluster queue name
          const char* host_name    - Host name

RESULT
          bool - True upon global limits exceeding

NOTES
          MT-NOTE: rqs_exceeded_sort_out() is MT safe


14.16 rqs_exceeded_sort_out_par

NAME
          rqs_exceeded_sort_out_par() -- Rule out queues/hosts whenever possible

SYNOPSIS
          void rqs_exceeded_sort_out_par(sge_assignment_t *a, const lListElem
          *rule, const dstring *rule_name, const char* queue_name, const char*
          host_name)

FUNCTION
          Function wrapper around rqs_exceeded_sort_out() for parallel jobs.
          In contrast to the sequential case global limit exceeding is handled
          by adding all cluster queue names to the a->skip_cqueue_list.

INPUTS
          sge_assignment_t *a      - Scheduler assignment type
          const lListElem *rule    - The exceeded rule
          const dstring *rule_name - Name of the rule (monitoring only)
          const char* queue_name   - Cluster queue name
          const char* host_name    - Host name

NOTES
          MT-NOTE: rqs_exceeded_sort_out_par() is MT safe


14.17 rqs_excluded_cqueues

NAME
          rqs_excluded_cqueues() -- Find excluded queues

SYNOPSIS
          static void rqs_excluded_cqueues(const lListElem *rule, sge_assignment_t *a)

FUNCTION
          Find queues that are excluded by previous rules.

INPUTS
          const lListElem *rule    - The rule
          sge_assignment_t *a      - Scheduler assignment

EXAMPLE
          limit        projects {*} queues !Q001 to F001=1
          limit        to F001=0   ( ---> returns Q001 in a->skip_cqueue_list)

NOTES
          MT-NOTE: rqs_excluded_cqueues() is MT safe


14.18 rqs_excluded_hosts

NAME
          rqs_excluded_hosts() -- Find excluded hosts

SYNOPSIS
          static void rqs_excluded_hosts(const lListElem *rule, sge_assignment_t *a)

FUNCTION
          Find hosts that are excluded by previous rules.

INPUTS
          const lListElem *rule    - The rule
          sge_assignment_t *a      - Scheduler assignment

EXAMPLE
          limit        projects {*} queues !gridware to F001=1
          limit        to F001=0   ( ---> returns gridware in skip_host_list)

NOTES
          MT-NOTE: rqs_excluded_hosts() is MT safe


14.19 rqs_expand_cqueues

NAME
          rqs_expand_cqueues() -- Add all matching cqueues to the list

SYNOPSIS
          void rqs_expand_cqueues(const lListElem *rule)

FUNCTION
          The names of all cluster queues that match the rule are added to
          the skip list without duplicates.

INPUTS
          const lListElem *rule    - RQR_Type

NOTES
          MT-NOTE: rqs_expand_cqueues() is not MT safe


14.20 rqs_expand_hosts

NAME
          rqs_expand_hosts() -- Add all matching hosts to the list

SYNOPSIS
          void rqs_expand_hosts(const lListElem *rule, lList **skip_host_list,
          const lList *host_list, lList *hgrp_list)

FUNCTION
          The names of all hosts that match the rule are added to
          the skip list without duplicates.

INPUTS
          const lListElem *rule  - RQR_Type
          const lList *host_list - EH_Type

NOTES
          MT-NOTE: rqs_expand_hosts() is MT safe


14.21 rqs_limitation_reached

NAME
          rqs_limitation_reached() -- is the limitation reached for a queue instance

SYNOPSIS
          static dispatch_t rqs_limitation_reached(sge_assignment_t *a, lListElem *rule,
          const char* host, const char* queue, u_long32 *start)

FUNCTION
          The function verifies that no limitation is reached for the specific job
          request and queue instance.

INPUTS
          sge_assignment_t *a   - job info structure
          const lListElem *rule - resource quota rule (RQR_Type)
          const char* host      - host name
          const char* queue     - queue name
          u_long32 *start       - start time of job

RESULT
          static dispatch_t - DISPATCH_OK job can be scheduled
                              DISPATCH_NEVER_CAT no jobs of this category will be scheduled
                              DISPATCH_NOT_AT_TIME job can be scheduled later
                              DISPATCH_MISSING_ATTR rule does not match requested attributes

NOTES
          MT-NOTE: rqs_limitation_reached() is not MT safe


14.22 rqs_match_assignment

NAME
          rqs_match_assignment() -- match resource quota rule against any queue instance

SYNOPSIS
          static bool rqs_match_assignment(const lListElem *rule, sge_assignment_t
          *a)

FUNCTION
          Check whether a resource quota rule can match any queue instance. If
          it does not match due to users/projects/pes scope one can rule this
          out.
          
          Note: As long as rqs_match_assignment() is not used for parallel jobs
                passing NULL as PE request is perfectly fine.

INPUTS
          const lListElem *rule - Resource quota rule
          sge_assignment_t *a   - Scheduler assignment

RESULT
          static bool - True if it matches

NOTES
          MT-NOTE: rqs_match_assignment() is MT safe


14.23 rqs_set_dynamical_limit

NAME
          rqs_set_dynamical_limit() -- evaluate dynamical limit

SYNOPSIS
          bool rqs_set_dynamical_limit(lListElem *limit, lListElem
          *global_host, lListElem *exec_host, lList *centry)

FUNCTION
          The function evaluates, if necessary, the dynamic limit for a host and
          sets the evaluated double value in the given limitation element (RQRL_dvalue).
          
          An evaluation is necessary if the limit boolean RQRL_dynamic is true. This
          field is set by qmaster during the rule set verification.

INPUTS
          lListElem *limit       - limitation (RQRL_Type)
          lListElem *global_host - global host (EH_Type)
          lListElem *exec_host   - exec host (EH_Type)
          lList *centry          - consumable resource list (CE_Type)

RESULT
          bool - always true

NOTES
          MT-NOTE: rqs_set_dynamical_limit() is MT safe


14.24 sge_user_is_referenced_in_rqs

NAME
          sge_user_is_referenced_in_rqs() -- search for user reference in rqs

SYNOPSIS
          bool sge_user_is_referenced_in_rqs(const lList *rqs, const char *user,
          lList *acl_list)

FUNCTION
          Search for a user reference in the resource quota sets

INPUTS
          const lList *rqs - resource quota set list
          const char *user  - user to search
          const char *group - user's group
          lList *acl_list   - acl list for user resolving

RESULT
          bool - true if user was found
                 false if user was not found

NOTES
          MT-NOTE: sge_user_is_referenced_in_rqs() is MT safe


15 sge_resource_utilization


15.1 add_calendar_to_schedule

NAME
          add_calendar_to_schedule() -- adds the queue calendar to the resource
                                        schedule

SYNOPSIS
          static void add_calendar_to_schedule(lList *queue_list)

FUNCTION
          Adds the queue calendars to the resource schedule. It uses the slot
          entry for simulating an enabled / disabled calendar.

INPUTS
          lList *queue_list - all queues, which can possibly run jobs
          u_long32 now      - now time of assignment

NOTES
          MT-NOTE: add_calendar_to_schedule() is MT safe

SEE ALSO


15.2 add_job_utilization

NAME
          add_job_utilization() -- Debit an assignment's utilization into all schedules

SYNOPSIS
          int add_job_utilization(const sge_assignment_t *a, const char *type)

FUNCTION
          The resource utilization of an assignment is debited into the schedules
          of global, host and queue instance resource containers and limitation
          rule sets. For parallel jobs debiting is also done in the parallel
          environment schedule.

INPUTS
          const sge_assignment_t *a - The assignment
          const char *type          - A string that is used to monitor assignment
                                      type
          bool for_job_scheduling   - utilize for job or for advance reservation

RESULT
          int -

NOTES
          MT-NOTE: add_job_utilization() is MT safe


15.3 newResourceElem

NAME
          newResourceElem() -- creates new resource schedule entry

SYNOPSIS
          static lListElem* newResourceElem(u_long32 time, double amount)

FUNCTION
          creates new resource schedule entry and returns it

INPUTS
          u_long32 time - specific time
          double amount - the utilized amount

RESULT
          static lListElem* - new resource schedule entry

NOTES
          MT-NOTE: newResourceElem() is MT safe
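
EXAMPLE
          The real schedule entry is a CULL element; the minimal sketch below only
          illustrates what such an entry pairs up (a point in time and a utilized
          amount), using a plain, hypothetical C struct in its place.

              #include <stdlib.h>

              /* hypothetical stand-in for the CULL-based schedule entry */
              typedef struct resource_entry {
                  unsigned long time;    /* specific time */
                  double amount;         /* utilized amount from that time on */
              } resource_entry_t;

              /* creates a new resource schedule entry and returns it,
                 or NULL if no memory is available */
              static resource_entry_t *new_resource_entry(unsigned long time,
                                                          double amount)
              {
                  resource_entry_t *entry = malloc(sizeof(*entry));

                  if (entry != NULL) {
                      entry->time = time;
                      entry->amount = amount;
                  }
                  return entry;
              }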

SEE ALSO


15.4 prepare_resource_schedules

NAME
          prepare_resource_schedules() -- Debit non-pending jobs in resource schedule

SYNOPSIS
          static void prepare_resource_schedules(const lList *running_jobs, const
          lList *suspended_jobs, lList *pe_list, lList *host_list, lList
          *queue_list, lList *centry_list, lList *rqs_list)

FUNCTION
          In order to reflect current and future resource utilization of running
          and suspended jobs in the schedule we iterate through all jobs and debit
          resources requested by those jobs.

INPUTS
          const lList *running_jobs   - The running ones (JB_Type)
          const lList *suspended_jobs - The suspended ones (JB_Type)
          lList *pe_list              - ???
          lList *host_list            - ???
          lList *queue_list           - ???
          lList *rqs_list             - configured resource quota sets
          lList *centry_list          - ???
          lList *acl_list             - ???
          lList *hgroup_list          - ???
          lList *prepare_resource_schedules - create schedule for job or advance reservation
                                              scheduling
          bool for_job_scheduling     - prepare for job or for advance reservation
          u_long32 now                - now time of assignment

NOTES
          MT-NOTE: prepare_resource_schedules() is not MT safe


15.5 rqs_add_job_utilization

NAME
          rqs_add_job_utilization() -- Debit assignment's utilization in a limitation
                                       rule

SYNOPSIS
          static int rqs_add_job_utilization(lListElem *jep, u_long32 task_id,
          const char *type, lListElem *rule, dstring rue_name, lList *centry_list,
          int slots, const char *obj_name, u_long32 start_time, u_long32 end_time,
          bool is_master_task)

FUNCTION
          ???

INPUTS
          lListElem *jep       - job element (JB_Type)
          u_long32 task_id     - task id to debit
          const char *type     - String denoting type of utilization entry
          lListElem *rule      - limitation rule (RQR_Type)
          dstring rue_name     - rue_name where to debit
          lList *centry_list   - master centry list (CE_Type)
          int slots            - slots to debit
          const char *obj_name - name of the object where to debit
          u_long32 start_time  - start time of utilization
          u_long32 end_time    - end time of utilization
          bool is_master_task  - is this the master task that is going to be debited

RESULT
          static int - number of modified limits

NOTES
          MT-NOTE: rqs_add_job_utilization() is MT safe

SEE ALSO


15.6 serf_exit

NAME
          serf_exit() -- Closes SERF

SYNOPSIS
          void serf_exit(void)

FUNCTION
          All operations required to cleanly shut down the SERF are done.

NOTES
          MT-NOTE: serf_exit() is MT safe


15.7 serf_init

NAME
          serf_init() -- Initializes SERF

SYNOPSIS
          void serf_init(record_schedule_entry_func_t write, new_schedule_func_t
          newline)

NOTES
          MT-NOTE: serf_init() is not MT safe


15.8 serf_new_interval

NAME
          serf_new_interval() -- Indicate a new scheduling run

SYNOPSIS
          void serf_new_interval(u_long32 time)

FUNCTION
          When a new scheduling run is started serf_new_interval() shall be
          called to indicate this. This allows schedule entry records to be
          assigned to different schedule runs.

INPUTS
          u_long32 time - The time when the schedule run was started.

NOTES
          MT-NOTE: (1) serf_new_interval() is MT safe if no recording function
          MT-NOTE:     was registered via serf_init().
          MT-NOTE: (2) Otherwise MT safety of serf_new_interval() depends on
          MT-NOTE:     MT safety of registered recording function


15.9 serf_record_entry

NAME
          serf_record_entry() -- Add a new schedule entry record

SYNOPSIS
          void serf_record_entry(u_long32 job_id, u_long32 ja_taskid, const char
          *state, u_long32 start_time, u_long32 end_time, char level_char, const
          char *object_name, const char *name, double utilization)

FUNCTION
          The entirety of all information passed to this function describes
          the schedule that was created during a scheduling interval of a
          Grid Engine scheduler. To reflect multiple resource debitations
          of a job multiple calls to serf_record_entry() are required. For
          parallel jobs serf_record_entry() is called once with
          'P' as level_char.

INPUTS
          u_long32 job_id         - The job id
          u_long32 ja_taskid      - The task id
          const char *type        - A string indicating the reason why the
                                    utilization was put into the schedule:
          
                                    RUNNING    - Job was running before scheduling run
                                    SUSPENDED  - Job was suspended before scheduling run
                                    MIGRATING  - Job being preempted (unused)
                                    STARTING   - Job will be started
                                    RESERVING  - Job reserves resources
          
          u_long32 start_time     - Start of the resource utilization
          
          u_long32 end_time       - End of the resource utilization
          
          char level_char         - Q - Queue
                                    H - Host
                                    G - Global
                                    P - Parallel Environment (PE)
          
          const char *object_name - Name of Queue/Host/Global/PE
          
          const char *name        - Resource name
          
          double utilization      - Utilization amount

NOTES
          MT-NOTE: (1) serf_record_entry() is MT safe if no recording function
          MT-NOTE:     was registered via serf_init().
          MT-NOTE: (2) Otherwise MT safety of serf_record_entry() depends on
          MT-NOTE:     MT safety of registered recording function
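
EXAMPLE
          A minimal usage sketch, not taken from the sources: the job id, task id,
          times, object names and amounts below are made-up values, and the SGE
          headers declaring u_long32 and serf_record_entry() are assumed to be
          included. A parallel job gets one record per debited resource container
          plus one additional record with 'P' as level_char.

              u_long32 job_id = 4711, task_id = 1;        /* hypothetical ids  */
              u_long32 start  = 1000, end = start + 3600; /* one hour of usage */

              /* queue-level slot debitation for a job that will be started */
              serf_record_entry(job_id, task_id, "STARTING", start, end,
                                'Q', "all.q@host1", "slots", 4.0);

              /* additional record for the parallel environment of the job */
              serf_record_entry(job_id, task_id, "STARTING", start, end,
                                'P', "mpi", "slots", 4.0);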


15.10 set_utilization

NAME
          set_utilization() -- adds one specific calendar entry to the resource schedule

SYNOPSIS
          static void set_utilization(lList *uti_list, u_long32 from, u_long32
          till, double uti)

FUNCTION
          This set utilization function is unique for calendars. It removes all other
          uti settings in the given time interval and replaces them with the given one.

INPUTS
          lList *uti_list - the uti list for a specific resource and queue
          u_long32 from   - starting time for this uti
          u_long32 till   - end time for this uti
          double uti      - utilization (needs to be bigger than 1; should be the maximum)

NOTES
          MT-NOTE: set_utilization() is MT safe

SEE ALSO


15.11 sge_qeti_first

NAME
          sge_qeti_first() --

SYNOPSIS
          u_long32 sge_qeti_first(sge_qeti_t *qeti)

FUNCTION
          Initialize/Reinitialize Queue End Time Iterator. All queue end next
          references are initialized to the queue end of all resource instances.
          Before the time that is most in the future is returned, the queue end
          next references are switched to the next entry that is earlier than the
          time that was returned.

INPUTS
          sge_qeti_t *qeti - ???

RESULT
          u_long32 -

NOTES
          MT-NOTE: sge_qeti_first() is MT safe


15.12 sge_qeti_next

NAME
          sge_qeti_next() -- ???

SYNOPSIS
          u_long32 sge_qeti_next(sge_qeti_t *qeti)

FUNCTION
          Return the next time that is most in the future. Then queue end next
          references are switched to the next entry that is earlier than the time
          that was returned.

INPUTS
          sge_qeti_t *qeti - ???

RESULT
          u_long32 -

NOTES
          MT-NOTE: sge_qeti_next() is MT safe


15.13 sge_qeti_release

NAME
          sge_qeti_release() -- Release queue end time iterator

SYNOPSIS
          void sge_qeti_release(sge_qeti_t *qeti)

FUNCTION
          Release all resources of the queue end time iterator. Referenced
          resource utilization diagrams are not affected.

INPUTS
          sge_qeti_t *qeti - ???

NOTES
          MT-NOTE: sge_qeti_release() is MT safe


15.14 utilization_add

NAME
          utilization_add() -- Debit a job's resource utilization

SYNOPSIS
          int utilization_add(lListElem *cr, u_long32 start_time, u_long32
          duration, double utilization, u_long32 job_id, u_long32 ja_taskid,
          u_long32 level, const char *object_name, const char *type)

FUNCTION
          A job's resource utilization is debited into the resource
          utilization diagram at the given time for the given duration.

INPUTS
          lListElem *cr           - Resource utilization entry (RUE_Type)
          u_long32 start_time     - Start time of utilization
          u_long32 duration       - Duration
          double utilization      - Amount
          u_long32 job_id         - Job id
          u_long32 ja_taskid      - Task id
          u_long32 level          - *_TAG
          const char *object_name - The objects name
          const char *type        - String denoting type of utilization entry.
          bool is_job             - reserve for job or for advance reservation
          bool implicit_non_exclusive - add implicit entry for non-exclusive jobs
                                        requesting an exclusive centry

RESULT
          int - 0 on success

NOTES
          MT-NOTE: utilization_add() is not MT safe


15.15 utilization_below

NAME
          utilization_below() -- Determine earliest time util is below max_util

SYNOPSIS
          u_long32 utilization_below(const lListElem *cr, double max_util, const
          char *object_name)

FUNCTION
          Determine and return earliest time utilization is below max_util.

INPUTS
          const lListElem *cr     - Resource utilization entry (RUE_utilized)
          double max_util         - The maximum utilization we're asking for
          const char *object_name - Name of the queue/host/global for monitoring
                                    purposes.
          bool for_excl_request   - match for exclusive request

RESULT
          u_long32 - The earliest time or DISPATCH_TIME_NOW.

NOTES
          MT-NOTE: utilization_below() is MT safe


15.16 utilization_max

NAME
          utilization_max() -- Determine max utilization within timeframe

SYNOPSIS
          double utilization_max(const lListElem *cr, u_long32 start_time, u_long32
          duration)

FUNCTION
          Determines the maximum utilization within the given timeframe.

INPUTS
          const lListElem *cr - Resource utilization entry (RUE_utilized)
          u_long32 start_time - Start time of the timeframe
          u_long32 duration   - Duration of timeframe
          bool for_excl_request - For exclusive request

RESULT
          double - Maximum utilization

NOTES
          MT-NOTE: utilization_max() is MT safe
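
EXAMPLE
          The resource utilization diagram can be thought of as a piecewise
          constant function of time. The standalone sketch below illustrates the
          idea of determining the maximum within a timeframe; it uses a plain,
          hypothetical array of change points instead of the RUE_Type list the
          scheduler actually works on.

              /* hypothetical change point: utilization is 'amount' from 'time'
                 on, until the next change point (0.0 before the first one) */
              typedef struct {
                  unsigned long time;
                  double amount;
              } util_point_t;

              /* maximum utilization within [start_time, start_time + duration)
                 over change points sorted by ascending time */
              static double util_max(const util_point_t *diag, int n,
                                     unsigned long start_time,
                                     unsigned long duration)
              {
                  unsigned long end_time = start_time + duration;
                  double max = 0.0;
                  int i;

                  for (i = 0; i < n; i++) {
                      /* segment i lasts from diag[i].time up to the next change */
                      int overlaps = diag[i].time < end_time &&
                                     (i + 1 == n || diag[i + 1].time > start_time);

                      if (overlaps && diag[i].amount > max) {
                          max = diag[i].amount;
                      }
                  }
                  return max;
              }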


15.17 utilization_print_to_dstring

NAME
          utilization_print_to_dstring() -- Print resource utilization to dstring

SYNOPSIS
          bool utilization_print_to_dstring(const lListElem *this_elem, dstring
          *string)

FUNCTION
          Print resource utilization as plain number to dstring.

INPUTS
          const lListElem *this_elem - A RUE_Type element
          dstring *string            - The string

RESULT
          bool - error state
             true  - success
             false - error

NOTES
          MT-NOTE: utilization_print_to_dstring() is MT safe


15.18 utilization_queue_end

NAME
          utilization_queue_end() -- Determine utilization at queue end time

SYNOPSIS
          double utilization_queue_end(const lListElem *cr)

FUNCTION
          Determine utilization at queue end time. Jobs that last forever
          can cause a non-zero utilization.

INPUTS
          const lListElem *cr - Resource utilization entry (RUE_utilized)
          bool for_excl_request - For exclusive request

RESULT
          double - queue end utilization

NOTES
          MT-NOTE: utilization_queue_end() is MT safe


16 sge_schedd_text


16.1 sge_get_schedd_text

NAME
          sge_get_schedd_text() -- transforms an id into an info message

SYNOPSIS
          const char* sge_get_schedd_text(int nr)

FUNCTION
          transforms an id into an info message

INPUTS
          int nr - info id

RESULT
          const char* -  info message

NOTES
          MT-NOTE: sge_get_schedd_text() is MT safe

SEE ALSO


17 sge_select_queue


17.1 access_cq_rejected

NAME
          access_cq_rejected() -- Check, if cluster queue rejects user/project

SYNOPSIS
          static bool access_cq_rejected(const char *user, const char *group, const
          lList *acl_list, const lListElem *cq)

FUNCTION
          ???

INPUTS
          const char *user      - Username
          const char *group     - Groupname
          const lList *acl_list - List of access list definitions
          const lListElem *cq   - Cluster queue

RESULT
          static bool - True, if rejected

NOTES
          MT-NOTE: access_cq_rejected() is MT safe


17.2 add_pe_slots_to_category

NAME
          add_pe_slots_to_category() -- defines an array of valid slot values

SYNOPSIS
          static bool add_pe_slots_to_category(category_use_t *use_category,
          u_long32 *max_slotsp, lListElem *pe, int min_slots, int max_slots, lList
          *pe_range)

FUNCTION
          In case of pe ranges this function allocates memory and fills it with
          valid pe slot values. If a category is set, it stores them in the category
          for further jobs.

INPUTS
          category_use_t *use_category - category caching structure, must not be NULL
          u_long32 *max_slotsp         - number of different slot settings
          lListElem *pe                - pe, must not be NULL
          int min_slots                - min slot setting (pe range)
          int max_slots                - max slot setting (pe range)
          lList *pe_range              - pe range, must not be NULL

RESULT
          static bool - true, if successful

NOTES
          MT-NOTE: add_pe_slots_to_category() is MT safe


17.3 clean_up_parallel_job

NAME
          clean_up_parallel_job() -- removes tags

SYNOPSIS
          static void clean_up_parallel_job(sge_assignment_t *a)

FUNCTION
          During PE job dispatch many queues and hosts are tagged. This
          function removes the tags.

INPUTS
          sge_assignment_t *a - the resource structure
          
          

NOTES
          MT-NOTE: clean_up_parallel_job() is not MT safe


17.4 clear_resource_tags

NAME
          clear_resource_tags() -- removes the tags from a resource request.

SYNOPSIS
          static void clear_resource_tags(lList *resources, u_long32 max_tag)

FUNCTION
          Removes the tags from the given resource list. A tag is only removed
          if it is smaller than or equal to the given tag value. The tag value "MAX_TAG"
          results in removing all existing tags, while the value "HOST_TAG" removes queue
          and host tags but keeps the global tags.

INPUTS
          lList *resources  - list of job requests.
          u_long32 max_tag - max tag element
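
EXAMPLE
          A standalone sketch of the clearing rule described above. The request
          list is modelled as a plain array, and the tag constants together with
          their numeric ordering are assumptions made for illustration only; the
          real function works on the CULL request list.

              /* hypothetical tag values, ordered from queue to global scope */
              enum { NO_TAG = 0, QUEUE_TAG = 1, HOST_TAG = 2,
                     GLOBAL_TAG = 3, MAX_TAG = 4 };

              typedef struct {
                  const char *name;
                  unsigned int tag;
              } tagged_request_t;

              /* clears every tag that is smaller than or equal to max_tag;
                 called with HOST_TAG it resets queue and host tags while global
                 tags survive, called with MAX_TAG it resets everything */
              static void clear_tags(tagged_request_t *requests, int n,
                                     unsigned int max_tag)
              {
                  int i;

                  for (i = 0; i < n; i++) {
                      if (requests[i].tag <= max_tag) {
                          requests[i].tag = NO_TAG;
                      }
                  }
              }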


17.5 compute_soft_violations

NAME
          compute_soft_violations() -- counts the violations in the request for a given host or queue

SYNOPSIS
          static int compute_soft_violations(lListElem *queue, int violation, lListElem *job,lList *load_attr, lList *config_attr,
                                    lList *actual_attr, lList *centry_list, u_long32 layer, double lc_factor, u_long32 tag)

FUNCTION
          this function checks if the current resources can satisfy the requests. The resources come from the global host, a
          given host or the queue. The function returns the number of violations.

INPUTS
          const sge_assignment_t *a - job info structure
          lListElem *queue     - should only be set when using this method on queue level
          int violation        - the number of previous violations. This is needed to get a correct result on queue level.
          lList *load_attr     - the load attributes, only when used on hosts or global
          lList *config_attr   - a list of custom attributes  (CE_Type)
          lList *actual_attr   - a list of custom consumables, they contain the current usage of these attributes (RUE_Type)
          u_long32 layer       - the current layer flag
          double lc_factor     - should be set, when load correction has to be done.
          u_long32 tag         - the current layer tag (GLOBAL_TAG, HOST_TAG, QUEUE_TAG)

RESULT
          static int - the number of violations ( = (prev. violations) + (new violations in this run)).


17.6 cqueue_match_static

NAME
          cqueue_match_static() -- Does cluster queue match the job?

SYNOPSIS
          static dispatch_t cqueue_match_static(const char *cqname,
          sge_assignment_t *a)

FUNCTION
          The function tries to find reasons (-q, -l and -P) why the
          entire cluster queue is not suited for the job.

INPUTS
          const char *cqname  - Cluster queue name
          sge_assignment_t *a - ???

RESULT
          static dispatch_t - Returns DISPATCH_OK  or DISPATCH_NEVER_CAT

NOTES
          MT-NOTE: cqueue_match_static() is MT safe


17.7 fill_category_use_t

NAME
          fill_category_use_t() -- fills the category_use_t structure.

SYNOPSIS
          void fill_category_use_t(sge_assignment_t *a, category_use_t
          *use_category, const char *pe_name)

FUNCTION
          If a cache structure for the given PE does not exist, it
          will generate the necessary data structures.

INPUTS
          sge_assignment_t *a          - job info structure (in)
          category_use_t *use_category - category info structure (out)
          const char* pe_name          - the current pe name or "NONE"

NOTES
          MT-NOTE: fill_category_use_t() is MT safe


17.8 get_attribute

NAME
          get_attribute() -- looks for an attribute, but only for one level (for host, global, or queue)

SYNOPSIS
          static lListElem* get_attribute(const char *attrname, lList *config_attr,
          lList *actual_attr, lList *load_attr, lList *centry_list, lListElem
          *queue, lListElem *rep, u_long32 layer, double lc_factor, dstring *reason)

FUNCTION
          Extracts the attribute specified with 'attrname' and finds the
          more important one, if it is defined multiple times on the same
          level. It only cares about one level.
          If the attribute is a consumable, one can specify a point in time and a duration.
          This will get the caller the minimum amount of that resource during the time frame.

INPUTS
          const char *attrname - attribute name one is looking for
          lList *config_attr   - user defined attributes (CE_Type)
          lList *actual_attr   - current usage of consumables (RUE_Type)
          lList *load_attr     - load attributes
          lList *centry_list   - the system wide attribute configuration
          lListElem *queue     - the current queue, or null, if one works on hosts
          u_long32 layer       - the current layer
          double lc_factor     - the load correction value
          dstring *reason      - space for error messages or NULL
          bool zero_utilization - ???
          u_long32 start_time  - begin of the time interval, one asks for the resource
          u_long32 duration    - the duration of the interval

RESULT
          static lListElem* - the element one was looking for or NULL


17.9 get_attribute_by_Name

NAME
          get_attribute_by_Name() -- returns an attribute by name

SYNOPSIS
          void lListElem* get_attribute_by_Name(lListElem* global, lListElem *host,
          lListElem *queue, const char* attrname, lList *centry_list, char *
          reason, int reason_size)

FUNCTION
          It looks into the different configurations on host, global and queue level and
          returns the attribute which was asked for. If the attribute is defined multiple
          times, only the valid one is returned.

INPUTS
          lListElem* global    - the global host
          lListElem *host      - a given host, can be NULL, then only the global host is important
          lListElem *queue     - a queue on the given host, can be NULL, then only the host and global are important
          const char* attrname - the attribute name one is looking for
          lList *centry_list   - the system wide attribute config list
          char *reason         - memory for the error message
          int reason_size      - the max length of an error message

RESULT
          void lListElem* - the element one is looking for (a copy) or NULL.


17.10 get_queue_resource

NAME
          get_queue_resource() -- extracts attribute information from the queue

SYNOPSIS
          static lListElem* get_queue_resource(lListElem *queue, lList *centry_list, const char *attrname)

FUNCTION
          All fixed queue attributes are directly coded into the queue structure. These have to be extracted
          and formed into a CE structure, which is what this function does. It takes a name for an attribute
          and returns a full CE structure, if the attribute is set in the queue. Otherwise it returns NULL.

INPUTS
          lListElem *queue_elem -
          lListElem *queue      -
          const char *attrname  - name of the attribute.

RESULT
          bool -
          


17.11 host_time_by_slots

NAME
          host_time_by_slots() -- Return time when host slots are available

SYNOPSIS
          int host_time_by_slots(int slots, u_long32 *start, u_long32 duration,
          int *host_soft_violations, lListElem *job, lListElem *ja_task, lListElem
          *hep, lList *centry_list, lList *acl_list)

FUNCTION
          The time when the specified slot amount is available at the host
          is determined. Behaviour depends on input/output parameter start
          
          DISPATCH_TIME_NOW
                0 an assignment is possible now
                1 no assignment now but later
               -1 assignment never possible for all jobs of the same category
               -2 assignment never possible for that particular job
          
          <any other time>
                0 an assignment is possible at the specified time
                1 no assignment at specified time but later
               -1 assignment never possible for all jobs of the same category
               -2 assignment never possible for that particular job
          
          DISPATCH_TIME_QUEUE_END
                0 an assignment is possible and the start time is returned
               -1 assignment never possible for all jobs of the same category
               -2 assignment never possible for that particular job

INPUTS
          int slots                 - ???
          u_long32 *start           - ???
          u_long32 duration         - ???
          int *host_soft_violations - ???
          lListElem *job            - ???
          lListElem *ja_task        - ???
          lListElem *hep            - ???
          lList *centry_list        - ???
          lList *acl_list           - ???


17.12 interactive_cq_rejected

NAME
          interactive_cq_rejected() --  Check, if -now yes rejects cluster queue

SYNOPSIS
          static bool interactive_cq_rejected(const lListElem *cq)

FUNCTION
          Returns true if -now yes jobs cannot be run in the cluster queue

INPUTS
          const lListElem *cq - cluster queue (CQ_Type)

RESULT
          static bool - True, if rejected

NOTES
          MT-NOTE: interactive_cq_rejected() is MT safe


17.13 is_attr_prior

NAME
          is_attr_prior() -- compares two attribute instances with each other

SYNOPSIS
          static bool is_attr_prior(lListElem *upper_el, lListElem *lower_el)

FUNCTION
          Checks if the first given attribute instance has a higher priority than
          the second instance.
          If the first is NULL, it returns false.
          If the second, or both the second and the first, is NULL, it returns true.
          If the "==" or "!=" operators are used, it returns true.
          If both are the same, it may return false.
          Otherwise it computes the minimum or maximum of the values.

INPUTS
          lListElem *upper_el - attribute which should be overridden by the second one.
          lListElem *lower_el - attribute which wants to override the first one.

RESULT
          static bool - true, when the first attribute has a higher priority.


17.14 is_requested

NAME
          is_requested() -- Returns true if specified resource is requested.

SYNOPSIS
          bool is_requested(lList *req, const char *attr)

FUNCTION
          Returns true if specified resource is requested. Both long name
          and shortcut name are checked.

INPUTS
          lList *req       - The request list (CE_Type)
          const char *attr - The resource name.

RESULT
          bool - true if requested, otherwise false

NOTES
          MT-NOTE: is_requested() is MT safe
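
EXAMPLE
          A simplified sketch of the long-name/shortcut check. The request list
          is modelled as a plain array, which is an assumption for illustration;
          the real function walks the CE_Type request list.

              #include <string.h>

              /* hypothetical request entry carrying full name and shortcut */
              typedef struct {
                  const char *name;      /* e.g. "arch" */
                  const char *shortcut;  /* e.g. "a"    */
              } ce_request_t;

              /* returns 1 if 'attr' matches the long name or the shortcut of
                 any entry in the request list, otherwise 0 */
              static int is_requested_attr(const ce_request_t *req, int n,
                                           const char *attr)
              {
                  int i;

                  for (i = 0; i < n; i++) {
                      if (strcmp(req[i].name, attr) == 0 ||
                          strcmp(req[i].shortcut, attr) == 0) {
                          return 1;
                      }
                  }
                  return 0;
              }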


17.15 load_locate_elem

NAME
          load_locate_elem() -- locates a consumable category in the given load list

SYNOPSIS
          static lListElem* load_locate_elem(lList *load_list, lListElem
          *global_consumable, lListElem *host_consumable, lListElem
          *queue_consumable)

INPUTS
          lList *load_list             - the load list to work on
          lListElem *global_consumable - a ref to the global consumable
          lListElem *host_consumable   - a ref to the host consumable
          lListElem *queue_consumable  - a ref to the queue consumable

RESULT
          static lListElem* - NULL, or the category element from the load list

NOTES
          MT-NOTE: load_locate_elem() is MT safe

SEE ALSO


17.16 load_np_value_adjustment

NAME
          load_np_value_adjustment() -- adjusts np load values for the number of processors

SYNOPSIS
          static int load_np_value_adjustment(const char* name, lListElem *hep,
          double *load_correction)

FUNCTION
          Tests the load value name for "np_*". If this pattern is found, it will
          retrieve the number of processors and adjust the load_correction accordingly.
          If the pattern is not found, it does nothing and returns 0 for the number of processors.

INPUTS
          const char* name        - load value name
          lListElem *hep          - host object
          double *load_correction - current load_correction for further corrections

RESULT
          static int - number of processors, or 0 if it was called on a non-np load value

NOTES
          MT-NOTE: load_np_value_adjustment() is MT safe
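
EXAMPLE
          A standalone sketch of the "np_*" handling described above. The
          processor count is passed in directly and the adjustment is assumed to
          divide the correction by that count (np_* values are per-processor
          values); the real function reads the count from the host element.

              #include <string.h>

              /* returns the processor count used for the adjustment, or 0 if
                 'name' does not start with "np_" and nothing was adjusted */
              static int np_value_adjustment(const char *name, int processors,
                                             double *load_correction)
              {
                  if (strncmp(name, "np_", 3) == 0) {
                      if (processors > 0) {
                          /* scale the correction down to a per-processor value */
                          *load_correction /= processors;
                      }
                      return processors;
                  }
                  return 0;
              }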


17.17 match_static_advance_reservation

NAME
          match_static_advance_reservation() -- Do matching that does not depend on
                                                queue or host

SYNOPSIS
          static dispatch_t match_static_advance_reservation(const sge_assignment_t
          *a)

FUNCTION
          Checks whether a job that requests an advance reservation can be scheduled.
          The job can be scheduled if the advance reservation is in state "running".

INPUTS
          const sge_assignment_t *a - assignment to match

RESULT
          static dispatch_t - DISPATCH_OK on success
                              DISPATCH_NEVER_CAT on error

NOTES
          MT-NOTE: match_static_advance_reservation() is MT safe


17.18 parallel_assignment

NAME
          parallel_assignment() -- Can we assign with a fixed PE/slot/time

SYNOPSIS
          int parallel_assignment(sge_assignment_t *assignment)

FUNCTION
          Returns, if possible, an assignment for a particular PE with a
          fixed slot amount at a fixed time.

INPUTS
          sge_assignment_t *a -
          category_use_t *use_category - has information on how to use the job category

RESULT
          dispatch_t -  0 ok got an assignment
                        1 no assignment at the specified time
                       -1 assignment will never be possible for all jobs of that category
                       -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: parallel_assignment() is not MT safe


17.19 parallel_available_slots

NAME
          parallel_available_slots() -- Check if number of PE slots is available

RESULT
          dispatch_t - 0 ok got an assignment
                       1 no assignment at the specified time
                      -1 assignment will never be possible for all jobs of that category

NOTES
          MT-NOTE: parallel_available_slots() is not MT safe


17.20 parallel_host_slots

NAME
          parallel_host_slots() -- Return host slots available at time period

FUNCTION
          The maximum amount available at the host for the specified time period
          is determined.
          


17.21 parallel_tag_hosts_queues

NAME
          parallel_tag_hosts_queues() -- Determine host slots and tag queue(s) accordingly

FUNCTION
          For a particular job the maximum number of slots that could be served
          at that host is determined in accordance with the allocation rule and
          returned. The time of the assignment can be either DISPATCH_TIME_NOW
          or a specific time, but never DISPATCH_TIME_QUEUE_END.
          
          In those cases when the allocation rule allows more than one slot to be
          served per host it is necessary to also consider load thresholds that may
          be specified per queue. This is because load is a global/per-host
          concept while load thresholds are a queue attribute.
          
          In those cases when the allocation rule gives us neither a fixed amount
          of required slots nor an upper limit for the number of per-host slots (i.e.
          $fill_up and $round_robin) we must iterate through all slot numbers from
          1 to the maximum number of slots "total_slots" and check for each slot
          amount whether we can get it or not. Iteration stops when we can't get
          more slots from the host based on the queue limitations and load thresholds.
          
          As long as only one single queue at the host is eligible for the job it
          is sufficient to check with each iteration whether the corresponding
          number of slots can be served by the host and its queue or not. The
          really tricky case however is when multiple queues are eligible for a host:
          here we have to determine in each iteration step also the maximum number
          of slots each queue could get us by doing a per-queue iteration from
          1 up to the maximum number of slots we're testing. The optimization in
          effect here is to always check only whether we could get more slots than
          with the former per-host slot amount iteration.

INPUTS
          sge_assignment_t *a          -
          lListElem *hep               - current host
          lListElem *global            - global host
          int *slots                   - out: # free slots
          int *slots_qend              - out: # free slots in the far far future
          int global_soft_violations   - # of global soft violations
          bool *master_host            - out: if true, found a master host
          category_use_t *use_category - int/out : how to use the job category

RESULT
          static dispatch_t -  0 ok got an assignment
                               1 no assignment at the specified time
                              -1 assignment will never be possible for all jobs of that category
                              -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: parallel_tag_hosts_queues() is not MT safe


17.22 parallel_tag_queues_suitable4job

NAME
          parallel_tag_queues_suitable4job() -- Tag queues/hosts for
             a comprehensive/parallel assignment

SYNOPSIS
          static int parallel_tag_queues_suitable4job(sge_assignment_t
                     *assignment)

FUNCTION
          We tag the number of available slots for that job at global, host and
          queue level under consideration of all constraints of the job. We also
          mark those queues that are suitable as a master queue as possible master
          queues and count the number of violations of the job's soft requests.
          The method below is named comprehensive since it does the tagging game
          for the whole parallel job and under consideration of all available
          resources that could help to satisfy the job's request. This is necessary
          to prevent consumable resource limitations at host/global level from being
          counted multiple times.
          
          While tagging we also set the queues' QU_host_seq_no based on the sort
          order of each host. The assumption is that the host list passed is sorted
          according to the load formula.

INPUTS
          sge_assignment_t *assignment - ???
          category_use_t use_category - information on how to use the job category

RESULT
          static dispatch_t - 0 ok got an assignment
                              1 no assignment at the specified time
                             -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: parallel_tag_queues_suitable4job() is not MT safe


17.23 pe_cq_rejected

NAME
          pe_cq_rejected() -- Check, if -pe pe_name rejects cluster queue

SYNOPSIS
          static bool pe_cq_rejected(const char *pe_name, const lListElem *cq)

FUNCTION
          Match a job's -pe 'pe_name' with the pe_list cluster queue configuration.
          True is returned if the parallel environment has no access.

INPUTS
          const char *pe_name - the pe request of a job (no wildcard)
          const lListElem *cq - cluster queue (CQ_Type)

RESULT
          static bool - True, if rejected

NOTES
          MT-NOTE: pe_cq_rejected() is MT safe


17.24 project_cq_rejected

NAME
          project_cq_rejected() -- Check, if -P project rejects cluster queue

SYNOPSIS
          static bool project_cq_rejected(const char *project, const lListElem *cq)

FUNCTION
          Match a job's -P 'project' with the project/xproject cluster queue
          configuration. True is returned if the project has no access.

INPUTS
          const char *project - the project of a job or NULL
          const lListElem *cq - cluster queue (CQ_Type)

RESULT
          static bool - True, if rejected

NOTES
          MT-NOTE: project_cq_rejected() is MT safe


17.25 rc_time_by_slots

NAME
          rc_time_by_slots() -- checks whether all resource requests on one level
                                  are fulfilled

SYNOPSIS
          static int rc_time_by_slots(lList *requested, lList *load_attr, lList
          *config_attr, lList *actual_attr, lList *centry_list, lListElem *queue,
          bool allow_non_requestable, char *reason, int reason_size, int slots,
          u_long32 layer, double lc_factor, u_long32 tag)

FUNCTION
          Checks whether all requests, default requests and implicit requests on
          this level are fulfilled.
          
          With reservation scheduling the earliest start time due to resources of the
          resource container is the maximum of the earliest start times for all
          resources comprised by the resource container that are requested by the job.

INPUTS
          lList *requested          - list of attribute requests
          lList *load_attr          - list of load attributes or null on queue level
          lList *config_attr        - list of user defined attributes
          lList *actual_attr        - usage of all consumables (RUE_Type)
          lList *centry_list        - system wide attribute config. list (CE_Type)
          lListElem *queue          - current queue or NULL on global/host level
          bool allow_non_requestable - allow non-requestable attributes?
          char *reason              - error message
          int reason_size           - max error message size
          int slots                 - number of slots the job is looking for
          u_long32 layer            - current layer flag
          double lc_factor          - load correction factor
          u_long32 tag              - current layer tag
          u_long32 *start_time      - in/out argument for start time
          u_long32 duration         - jobs estimated total run time
          const char *object_name   - name of the object used for monitoring purposes

RESULT
          dispatch_t -

NOTES
             MT-NOTE: rc_time_by_slots() is not thread safe; it uses a static buffer
          
          Important:
             we have some special behavior, when slots is set to -1.


17.26 ri_slots_by_time

NAME
          ri_slots_by_time() -- Determine number of slots avail. within time frame

SYNOPSIS
          static dispatch_t ri_slots_by_time(const sge_assignment_t *a, int *slots,
          int *slots_qend, lList *rue_list, lListElem *request, lList *load_attr,
          lList *total_list, lListElem *queue, u_long32 layer, double lc_factor,
          dstring *reason, bool allow_non_requestable, bool no_centry, const char
          *object_name)

FUNCTION
          The number of slots available with a resource can be zero for static
          resources or is determined based on maximum utilization within the
          specific time frame, the total amount of the resource and the per
          task request of the parallel job (ri_slots_by_time())

INPUTS
          const sge_assignment_t *a  - ???
          int *slots                 - Returns maximum slots that can be served
                                       within the specified time frame.
          int *slots_qend            - Returns the maximum possible number of slots
          lList *rue_list            - Resource utilization (RUE_Type)
          lListElem *request         - Job request (CE_Type)
          lList *load_attr           - Load information for the resource
          lList *total_list          - Total resource amount (CE_Type)
          lListElem *queue           - Queue instance (QU_Type) for queue-based resources
          u_long32 layer             - DOMINANT_LAYER_{GLOBAL|HOST|QUEUE}
          double lc_factor           - load correction factor
          dstring *reason            - diagnosis information if no rsrc available
          bool allow_non_requestable - ???
          bool no_centry             - ???
          const char *object_name    - ???

RESULT
          static dispatch_t -

NOTES
          MT-NOTE: ri_slots_by_time() is not MT safe


17.27 ri_time_by_slots

NAME
          ri_time_by_slots() -- Determine availability time through slot number

SYNOPSIS
          int ri_time_by_slots(lListElem *rep, lList *load_attr, lList
          *config_attr, lList *actual_attr, lList *centry_list, lListElem *queue,
          char *reason, int reason_size, bool allow_non_requestable, int slots,
          u_long32 layer, double lc_factor)

FUNCTION
          Checks, for one level, whether one request is fulfilled or not.
          
          With reservation scheduling the earliest start time due to
          availability of the resource instance is determined by ensuring
          non-consumable resource requests are fulfilled or by finding the
          earliest time utilization of a consumable resource is below the
          threshold required for the request.

INPUTS
          sge_assignment_t *a       - assignment object that holds job specific scheduling relevant data
          lListElem *rep            - requested attribute
          lList *load_attr          - list of load attributes or null on queue level
          lList *config_attr        - list of user defined attributes (CE_Type)
          lList *actual_attr        - usage of user consumables (RUE_Type)
          lListElem *queue          - the current queue, or null on host level
          dstring *reason           - target for error message
          bool allow_non_requestable - allow non-requestable attributes?
          int slots                 - the number of slots the job is looking for?
          u_long32 layer            - the current layer
          double lc_factor          - load correction factor
          u_long32 *start_time      - in/out argument for start time
          const char *object_name   - name of the object used for monitoring purposes

RESULT
          dispatch_t -


17.28 sequential_tag_queues_suitable4job

NAME
          sequential_tag_queues_suitable4job() -- ???

FUNCTION
          The start time of a queue is always returned using the QU_available_at
          field.
          
          The overall behaviour of this function is somewhat dependent on the
          value that gets passed to assignment->start and whether soft requests
          were specified with the job:
          
          (1) In case of now assignments (DISPATCH_TIME_NOW) only the first queue
              suitable for jobs without soft requests is tagged. When soft requests
              are specified all queues must be verified and tagged in order to find
              the queue that fits best.
          
          (2) In case of reservation assignments (DISPATCH_TIME_QUEUE_END) the earliest
              time is searched when the resources of global/host/queue are sufficient
              for the job. The time-wise iteration is then done for each single resource
              instance.
          
              Actually there are cases when iterating through all queues is not
              needed: (a) if there is a global limitation the search could stop once
              a queue is found that causes no further delay; (b) if the job has
              a soft request the search could stop once a queue is found with minimum (=0)
              soft violations.

INPUTS
          sge_assignment_t *assignment - job info structure

RESULT
          dispatch_t - 0 ok got an assignment
                         start time(s) and slots are tagged
                       1 no assignment at the specified time
                      -1 assignment will never be possible for all jobs of that category
                      -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: sequential_tag_queues_suitable4job() is not MT safe


17.29 sge_call_pe_qsort

NAME
          sge_call_pe_qsort() -- call the Parallel Environment qsort plug-in

SYNOPSIS
          void sge_call_pe_qsort(sge_assignment_t *a, const char *qsort_args)

INPUTS
          sge_assignment_t *a - PE assignment
          qsort_args - the PE qsort_args attribute

NOTES
          MT-NOTE: sge_call_pe_qsort() is not MT safe


17.30 sge_create_load_list

NAME
          sge_create_load_list() -- create the control structure for consumables as
                                    load thresholds

SYNOPSIS
          void sge_create_load_list(const lList *queue_list, const lList
          *host_list, const lList *centry_list, lList **load_list)

FUNCTION
          scans all queues for consumables as load thresholds. It builds a
          consumable category for each queue which is using consumables as a load
          threshold.
          If no consumables are used, the *load_list is set to NULL.

INPUTS
          const lList *queue_list  - a list of queue instances
          const lList *host_list   - a list of hosts
          const lList *centry_list - a list of complex entries
          lList **load_list        - a ref to the target load list

NOTES
          MT-NOTE: sge_create_load_list() is MT safe

SEE ALSO


17.31 sge_free_load_list

NAME
          sge_free_load_list() -- frees the load list and sets it to NULL

SYNOPSIS
          void sge_free_load_list(lList **load_list)

INPUTS
          lList **load_list - the load list

NOTES
          MT-NOTE: sge_free_load_list() is MT safe

SEE ALSO


17.32 sge_host_match_static

NAME
          sge_host_match_static() -- Static test whether job fits to host

SYNOPSIS
          static int sge_host_match_static(lListElem *job, lListElem *ja_task,
          lListElem *host, lList *centry_list, lList *acl_list)

INPUTS
          lListElem *job     - ???
          lListElem *ja_task - ???
          lListElem *host    - ???
          lList *centry_list - ???
          lList *acl_list    - ???

RESULT
          int - 0 ok
               -1 assignment will never be possible for all jobs of that category
               -2 assignment will never be possible for that particular job


17.33 sge_load_list_alarm

NAME
          sge_load_list_alarm() -- checks if queues went into an alarm state

SYNOPSIS
          bool sge_load_list_alarm(lList *load_list, const lList *host_list, const
          lList *centry_list)

FUNCTION
          The function uses the cull bitfield to identify modifications in one of
          the consumable elements. If the consumption has changed, the load for all
          queues referencing the consumable is recomputed. If a queue exceeds its
          load threshold, QU_tagged4schedule is set to 1.

INPUTS
          lList *load_list         - ???
          const lList *host_list   - ???
          const lList *centry_list - ???

RESULT
          bool - true, if at least one queue was set into alarm state

NOTES
          MT-NOTE: sge_load_list_alarm() is MT safe

SEE ALSO


17.34 sge_queue_match_static

NAME
          sge_queue_match_static() -- Do matching that does not depend on time.

SYNOPSIS
          static int sge_queue_match_static(lListElem *queue, lListElem *job,
          const lListElem *pe, const lListElem *ckpt, lList *centry_list, lList
          *host_list, lList *acl_list)

FUNCTION
          Checks if a job fits on a queue or not. All checks that depend on the
          current load and resource situation must get handled outside.
          The queue also gets tagged in QU_tagged4schedule to indicate whether it
          is specified using -masterq queue_list.

INPUTS
          lListElem *queue      - The queue we're matching
          lListElem *job        - The job
          const lListElem *pe   - The PE object
          const lListElem *ckpt - The ckpt object
          lList *centry_list    - The centry list
          lList *acl_list       - The ACL list

RESULT
          dispatch_t - DISPATCH_OK, ok
                       DISPATCH_NEVER_CAT, assignment will never be possible for all jobs of that category


17.35 sge_remove_queue_from_load_list

NAME
          sge_remove_queue_from_load_list() -- removes queues from the load list

SYNOPSIS
          void sge_remove_queue_from_load_list(lList **load_list, const lList
          *queue_list)

INPUTS
          lList **load_list       - load list structure
          const lList *queue_list - queues to be removed from it.

NOTES
          MT-NOTE: sge_remove_queue_from_load_list() is MT safe

SEE ALSO


17.36 sge_select_queue

NAME
          sge_select_queue() -- checks whether a job matches a given queue or host

SYNOPSIS
          int sge_select_queue(lList *requested_attr, lListElem *queue, lListElem
          *host, lList *exechost_list, lList *centry_list, bool
          allow_non_requestable, int slots)

FUNCTION
          Takes the requested attributes from a job and checks if they match the given
          host or queue. One and only one should be specified. If both are given, the
          function assumes that the queue belongs to the given host.

INPUTS
          lList *requested_attr     - list of requested attributes
          lListElem *queue          - current queue or null if host is set
          lListElem *host           - current host or null if queue is set
          lList *exechost_list      - list of all hosts in the system
          lList *centry_list        - system wide attribute config list
          bool allow_non_requestable - allow non-requestable attributes?
          int slots                 - number of requested slots
          lList *queue_user_list    - list of users or null
          lList *acl_list           - acl_list or null
          lListElem *job            - job or null

RESULT
          int - 1, if okay, QU_tag will be set if a queue is selected
                0, if not okay

NOTES
          The caller is responsible for cleaning tags.
          
          No range is used. For serial jobs we will need one call for hard and one
          for soft requests. For parallel jobs we will call this function for each
          -l request, because in serial jobs requests can simply be added, while in
          parallel jobs each -l request selects a different set of queues.


17.37 sge_sequential_assignment

NAME
          sge_sequential_assignment() -- Make an assignment for a sequential job.

SYNOPSIS
          int sge_sequential_assignment(sge_assignment_t *assignment)

FUNCTION
          For sequential job assignments the earliest job start time
          is determined for each queue instance and the earliest one gets
          chosen. The secondary criterion for queue selection is minimizing the
          job's soft request violations.
          
          The overall behaviour of this function is somewhat dependent on the
          value that gets passed to assignment->start and whether soft requests
          were specified with the job:
          
          (1) In case of now assignments (DISPATCH_TIME_NOW) only the first queue
              suitable for jobs without soft requests is tagged. When soft requests
              are specified all queues must be verified and tagged in order to find
              the queue that fits best. On success the start time is set.
          
          (2) In case of queue end assignments (DISPATCH_TIME_QUEUE_END)
          

INPUTS
          sge_assignment_t *assignment - ???

RESULT
          int - 0 ok got an assignment + time (DISPATCH_TIME_NOW and DISPATCH_TIME_QUEUE_END)
                1 no assignment at the specified time
               -1 assignment will never be possible for all jobs of that category
               -2 assignment will never be possible for that particular job

NOTES
          MT-NOTE: sge_sequential_assignment() is not MT safe


17.38 sge_split_queue_slots_free

NAME
          sge_split_queue_slots_free() -- ???

SYNOPSIS
          int sge_split_queue_slots_free(lList **free, lList **full)

FUNCTION
          Split queue list into queues with at least one free slot and queues with
          less than one free slot. The list optionally returned in full gets the
          QNOSLOTS queue instance state set.

INPUTS
          lList **free - Input queue instance list; returns the queues with free slots.
          lList **full - If non-NULL the full queue instances get returned here.

RESULT
          int - 0 success
               -1 error


18 sge_sharetree_printing


18.1 print_hdr

NAME
          print_hdr() -- print a header for the sharetree dump

SYNOPSIS
          void
          print_hdr(dstring *out, const format_t *format)

FUNCTION
          Prints a header for data output using the sge_sharetree_print function.

INPUTS
          dstring *out           - dstring into which data will be written
          const format_t *format - format description

NOTES
          MT-NOTE: print_hdr() is MT-safe

SEE ALSO


18.2 sge_sharetree_print

NAME
          sge_sharetree_print() -- dump sharetree information to a dstring

SYNOPSIS
          void sge_sharetree_print(dstring *out, lList *sharetree, lList *users,
                                   lList *projects, lList *config,
                                   bool group_nodes, bool decay_usage,
                                   const char **names, const format_t *format)

FUNCTION
          Dumps information about a sharetree into a given dstring. Information
          is appended.
          
          Outputs information like times, node (user/project) names, configured
          shares, actually received shares, targeted shares, usage information
          like cpu, memory and io.
          
          It is possible to restrict the number of fields that are output.
          
          Header information and formatting can be configured.

INPUTS
          dstring *out           - dstring into which data will be written
          lList *sharetree       - the sharetree to dump
          lList *users           - the user list
          lList *projects        - the project list
          lList *config          - the scheduler configuration list
          bool group_nodes       - ???
          bool decay_usage       - ???
          const char **names     - fields to output
          const format_t *format - format description

NOTES
          MT-NOTE: sge_sharetree_print() is  MT-safe

SEE ALSO


19 sge_urgency


19.1 sge_do_urgency

NAME
          sge_do_urgency() -- Compute normalized urgency

SYNOPSIS
          void sge_do_urgency(u_long32 now, lList *running_jobs, lList
          *pending_jobs, sge_Sdescr_t *lists)

FUNCTION
          Determine normalized urgency for all job lists passed:
          * for the pending jobs we need it to determine the dispatch order
          * for the running jobs it is needed when the running jobs' priority must
            be compared with pending jobs (preemption only)

INPUTS
          u_long32 now        - Current time
          lList *running_jobs - The running jobs list
          lList *pending_jobs - The pending jobs list
          sge_Sdescr_t *lists - Additional config information


19.2 sge_normalize_urgency

NAME
          sge_normalize_urgency() -- Computes normalized urgency for job list

SYNOPSIS
          static void sge_normalize_urgency(lList *job_list, double
          min_urgency, double max_urgency)

FUNCTION
          The normalized urgency is determined for a list of jobs based on the
          min/max urgency values passed and the JB_urg value of each job.

INPUTS
          lList *job_list           - The job list
          double min_urgency - minimum urgency value
          double max_urgency - maximum urgency value

NOTES
          MT-NOTES: sge_normalize_urgency() is MT safe


19.3 sge_normalize_value

NAME
          sge_normalize_value() -- Returns normalized value with passed value range

SYNOPSIS
          double sge_normalize_value(double value, double range_min, double
          range_max)

FUNCTION
          The value passed is normalized and the resulting value (0.0-1.0) is returned.
          The value range passed is assumed. In case there is no range, because
          min/max are (nearly) equal, 0.5 is returned.

INPUTS
          double value     - Value to be normalized.
          double range_min - Range minimum value.
          double range_max - Range maximum value.

RESULT
          double - Normalized value (0.0-1.0)

NOTES
          MT-NOTE: sge_normalize_value() is MT safe
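
EXAMPLE
          A minimal sketch of the normalization described above. The epsilon
          used for the "(nearly) equal" test and the clamping of out-of-range
          values are assumptions, not taken from the actual implementation.

          double normalize_value_sketch(double value, double range_min,
                                        double range_max)
          {
             const double epsilon = 0.000001;

             /* no usable range: return the middle of the normalized range */
             if (range_max - range_min < epsilon) {
                return 0.5;
             }

             /* clamp defensively, then scale into 0.0 - 1.0 */
             if (value < range_min) {
                value = range_min;
             } else if (value > range_max) {
                value = range_max;
             }

             return (value - range_min) / (range_max - range_min);
          }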


19.4 sge_urgency

NAME
          sge_urgency() -- Determine urgency value for a list of jobs

SYNOPSIS
          static void sge_urgency(u_long32 now, double *min_urgency,
          double *max_urgency, lList *job_list, const lList *centry_list,
          const lList *pe_list)

FUNCTION
          The urgency value is determined for all jobs in job_list. The urgency
          value has two time dependent components (waiting time contribution and
          deadline contribution) and a resource request dependent component. Only
          resource requests that apply to the job irrespective of which resources
          it is finally assigned are considered. Default requests specified for
          consumable resources are not considered, as they are placement dependent.
          For the same reason, soft requests do not contribute to the urgency value.
          The urgency value range is tracked via min/max urgency. Category-based
          caching is used for the resource request urgency contribution.

INPUTS
          u_long32 now               - Current time
          double *min_urgency        - For tracking minimum urgency value
          double *max_urgency        - For tracking maximum urgency value
          lList *job_list            - The jobs.
          const lList *centry_list   - Needed for per resource urgency setting.
          const lList *pe_list       - Needed to determine urgency slot setting.
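
EXAMPLE
          A simplified sketch of how the per-job urgency could be assembled
          from the three contributions named above while the value range is
          tracked. The JB_wtcontr/JB_dlcontr/JB_rrcontr field names are
          assumptions, and the computation of the individual contributions
          (which would use 'now', the centry list and the pe list) is omitted.

          static void urgency_sketch(u_long32 now, double *min_urgency,
                                     double *max_urgency, lList *job_list)
          {
             lListElem *job = NULL;

             (void)now;   /* contributions are assumed precomputed here */

             for_each(job, job_list) {
                /* waiting time + deadline + resource request contributions */
                double urg = lGetDouble(job, JB_wtcontr) +
                             lGetDouble(job, JB_dlcontr) +
                             lGetDouble(job, JB_rrcontr);

                lSetDouble(job, JB_urg, urg);

                if (urg < *min_urgency) {
                   *min_urgency = urg;
                }
                if (urg > *max_urgency) {
                   *max_urgency = urg;
                }
             }
          }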


20 sgeee


20.1 build_functional_categories

NAME
          build_functional_categories() -- sorts the pending jobs into functional categories

SYNOPSIS
          void build_functional_categories(sge_ref_t *job_ref, int num_jobs,
          sge_fcategory_t **root, int dependent)

FUNCTION
          Generates a list of functional categories. Each category contains a list of jobs
          which belong to this category. A functional category is assembled from:
          - job shares
          - user shares
          - department shares
          - project shares
          All jobs with the same job, user, ... shares are put in the same fcategory.
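
          A simplified illustration of the grouping key; the real
          sge_fcategory_t differs and sge_ref_list_t is used here only as an
          opaque job list type:

          /* one functional category: all jobs sharing the same share tuple */
          typedef struct fcategory_sketch {
             u_long32 job_shares;               /* job shares                */
             u_long32 user_shares;              /* user functional shares    */
             u_long32 dept_shares;              /* department func. shares   */
             u_long32 project_shares;           /* project functional shares */
             sge_ref_list_t *jobs;              /* jobs with this tuple      */
             struct fcategory_sketch *next;
          } fcategory_sketch_t;

          /* two jobs belong to the same category iff all four values match */
          static bool same_fcategory(const fcategory_sketch_t *c, u_long32 job,
                                     u_long32 user, u_long32 dept,
                                     u_long32 project)
          {
             return c->job_shares == job && c->user_shares == user &&
                    c->dept_shares == dept && c->project_shares == project;
          }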

INPUTS
          sge_ref_t *job_ref     - array of pointers to the job reference structure
          int num_jobs           - number of elements in the job_ref array
          sge_fcategory_t **root - root pointer to the functional category list
          sge_ref_list_t **ref_array - has to be a pointer to a NULL pointer. The memory
                                       will be allocated in this function and freed with
                                       free_fcategories.
          int dependent          - do the functional tickets depend on previously computed tickets?
          u_long32 job_tickets   - job field which holds the tickets (JB_jobshare, JB_override_tickets)
          u_long32 up_tickets    - source for the user/project tickets/shares (UP_fshare, UP_otickets)
          u_long32 dp_tickets    - source for the department tickets/shares (US_fshare, US_oticket)

OUTPUT
          u_long32 - number of jobs in the categories

NOTES
             - job classes are ignored.
          
          IMPROVEMENTS:
             - the stored values in the functional category structure can be used to speed up the
               ticket calculation. This will avoid unnecessary CULL accesses in the function
               calc_job_functional_tickets_pass1
             - A further improvement can be made by:
                - limiting the job list length in each category to the maximum number of jobs calculated
                - sorting the jobs in each functional category by their job category. Each resulting
                  job list can be at most as long as the number of open slots. This will result in a
                  correct ftix result for all jobs that might be scheduled.

BUGS
          ???


20.2 calc_intern_pending_job_functional_tickets

NAME
          calc_intern_pending_job_functional_tickets() -- calc ftix for pending jobs

SYNOPSIS
          void calc_intern_pending_job_functional_tickets(sge_fcategory_t *current,
                                         double sum_of_user_functional_shares,
                                         double sum_of_project_functional_shares,
                                         double sum_of_department_functional_shares,
                                         double sum_of_job_functional_shares,
                                         double total_functional_tickets,
                                         double weight[])

FUNCTION
          This is an optimized and incomplete version of calc_pending_job_functional_tickets.
          It is good enough to get the order right within the inner loop of the ftix
          calculation.

INPUTS
          sge_fcategory_t *current                   - current fcategory
          double sum_of_user_functional_shares
          double sum_of_project_functional_shares
          double sum_of_department_functional_shares
          double sum_of_job_functional_shares
          double total_functional_tickets
          double weight[]                            - distribution of the shares relative to each other
          

NOTES
          Be careful when using this function.

BUGS
          ???


20.3 calculate_pending_shared_override_tickets

NAME
          calculate_pending_shared_override_tickets() -- calculate shared override tickets

SYNOPSIS
          static void calculate_pending_shared_override_tickets(sge_ref_t *job_ref,
          int num_jobs, int dependent)

FUNCTION
             We calculate the override tickets for pending jobs which are shared. The basic
             algorithm looks like this:

             do for each pending job
                do for each pending job which isn't yet considered active
                      consider the job active
                      calculate override tickets for that job
                      consider the job not active
                  end do
                  consider the job with the highest priority (taking into account all
                  previous policies + override tickets) as active
             end do

             set all pending jobs to not active

          Since this algorithm is very expensive, we split all pending jobs into fcategories.
          The algorithm changes to:

            max_jobs = build fcategories and ignore jobs which would get 0 override tickets

             do for max_jobs pending jobs
                do for each fcategory

                   take the first job from the category
                   consider the job active
                   calculate override tickets for that job
                   consider the job not active
                   store the job with the most override tickets = job_max

                end do
                set job_max active and remove it from its fcategory.
                remove job_max's fcategory, if job_max was its last job
             end;

             set all pending jobs to not active


          That's it. It is very similar to the functional ticket calculation, except that we
          are working with tickets and not with shares.
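
          A compact C rendering of the category-based loop above; the types
          and the calc_override_tickets() helper are simplified placeholders,
          not the real scheduler structures:

          typedef struct job_sketch {
             int active;
             struct job_sketch *next;
          } job_sketch_t;

          typedef struct fcat_sketch {
             job_sketch_t *jobs;                 /* pending jobs, head first */
             struct fcat_sketch *next;
          } fcat_sketch_t;

          /* hypothetical helper: override tickets for one candidate job */
          extern double calc_override_tickets(job_sketch_t *job);

          static void pick_pending_jobs_sketch(fcat_sketch_t **root, int max_jobs)
          {
             int i;

             for (i = 0; i < max_jobs; i++) {
                fcat_sketch_t *cat, *best_cat = NULL;
                double best_otix = 0.0;

                /* probe each category head with the job temporarily active */
                for (cat = *root; cat != NULL; cat = cat->next) {
                   double otix;

                   if (cat->jobs == NULL) {
                      continue;
                   }
                   cat->jobs->active = 1;
                   otix = calc_override_tickets(cat->jobs);
                   cat->jobs->active = 0;

                   if (best_cat == NULL || otix > best_otix) {
                      best_otix = otix;
                      best_cat  = cat;
                   }
                }
                if (best_cat == NULL) {
                   break;                        /* no pending jobs left */
                }

                /* the winner stays active and leaves its category */
                best_cat->jobs->active = 1;
                best_cat->jobs = best_cat->jobs->next;

                /* drop the category once it runs empty */
                if (best_cat->jobs == NULL) {
                   fcat_sketch_t **link = root;

                   while (*link != best_cat) {
                      link = &(*link)->next;
                   }
                   *link = best_cat->next;
                }
             }
          }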

INPUTS
          sge_ref_t *job_ref - an array of job structures (first running, then pending)
          int num_jobs       - number of jobs in the array
          int dependent      - do other ticket policies depend on this one?

NOTES
          MT-NOTE: calculate_pending_shared_override_tickets() is MT safe

SEE ALSO


20.4 copy_ftickets

NAME
          copy_ftickets() -- copy the ftix from one job to another

SYNOPSIS
          void copy_ftickets(sge_ref_list_t *source, sge_ref_list_t *dest)

FUNCTION
          Copy the functional tickets and ref fields used for ftix calculation
          from one job to another job.

INPUTS
          sge_ref_list_t *source - source job
          sge_ref_list_t *dest   - dest job

BUGS
          ???


20.5 destribute_ftickets

NAME
          destribute_ftickets() -- ensures that all jobs have ftix associated with them.

SYNOPSIS
          void destribute_ftickets(sge_fcategory_t *root, int dependent)

FUNCTION
          After the functional tickets are calculated, only the first job in the fcategory
          job list has ftix. This function copies the result from the first job to all
          other jobs in the same list and adds the ftix to each job's ticket count.

INPUTS
          sge_fcategory_t *root - fcategory list
          int dependent         - does the final ticket count depend on ftix?
          
          

NOTES
          - This function is only needed because not all functional tickets are calculated.
            To give a best-guess result, all jobs in one category without ftix get the
            same amount of ftix.


20.6 free_fcategories

NAME
          free_fcategories() -- frees all fcategories and their job lists.

SYNOPSIS
          void free_fcategories(sge_fcategory_t **fcategories)

FUNCTION
          Frees all fcategories and their job lists.

INPUTS
          sge_fcategory_t **fcategories - pointer to a pointer to the first fcategory
          sge_ref_list_t **ref_array    - memory for internal structures, allocated with
                                          build_functional_categories. Needs to be freed as well.

NOTES
          - it does not delete the sge_ref_t structures, which are stored in
            the job lists.


20.7 recompute_prio

NAME
          recompute_prio() -- Recompute JAT prio based on changed ticket amount

SYNOPSIS
          static void recompute_prio(sge_task_ref_t *tref, lListElem *task, double
          nurg)

FUNCTION
          Each time the ticket amount in a JAT_Type element is changed, the
          JAT_prio value needs to be updated. The new ticket value is normalized
          and the priority value is computed.

INPUTS
          sge_task_ref_t *tref - The tref element that is related to the ticket change
          lListElem *task      - The JAT_Type task element.
          double nurg          - The normalized urgency assumed for the job.
          double npri          - The normalized POSIX priority assumed for the job.


20.8 sge_build_sgeee_orders

NAME
          sge_build_sgeee_orders() -- build orders for updating qmaster

SYNOPSIS
          void sge_build_sgeee_orders(sge_Sdescr_t *lists, lList *running_jobs,
          lList *queued_jobs, lList *finished_jobs, order_t *orders, int
          update_usage_and_configuration, int seqno)

FUNCTION
          Generates the order list for sending the scheduling decisions
          to the qmaster. The following orders are generated:
          - running job tickets
          - pending job tickets
          - delete order for finished jobs
          - update user usage order
          - update project usage order
          - update share tree order
          - update scheduler configuration order
          - orders updating user/project resource usage (ORT_update_project_usage)
          - orders updating running tickets needed for dynamic reprioritization (ORT_ticket)
          Most orders are generated by using the sge_create_orders function.

INPUTS
          sge_Sdescr_t *lists                 - ???
          lList *running_jobs                 - list of running jobs
          lList *queued_jobs                  - list of queued jobs (should be sorted by tickets)
          lList *finished_jobs                - list of finished jobs
          order_t *orders                     - existing order list (new orders will be added to it)
          bool update_usage_and_configuration - if true, the update usage orders are generated
          int seqno                           - a seqno, changed with each scheduling run
          bool max_queued_ticket_orders       - if true, pending tickets are submitted to the
                                                qmaster
          bool updated_execd                  - if true, the queue information is sent with
                                                the running job tickets

RESULT
          void


20.9 sge_do_sgeee_priority

NAME
          sge_do_sgeee_priority() -- determine GEEE priority for a list of jobs

SYNOPSIS
          static void sge_do_sgeee_priority(lList *job_list, double min_tix, double
          max_tix)

FUNCTION
          Determines the GEEE priority for a list of jobs. Before
          sge_do_sgeee_priority() can be called, the normalized urgency value
          must already be known for each job. The ticket range passed is used
          for normalizing the ticket amount.

INPUTS
          lList *job_list - The job list
          double min_tix  - Minimum ticket amount
          double max_tix  - Maximum ticket amount
          bool do_nprio   - Whether the normalized priority needs to be determined
          bool do_nurg    - Whether the normalized urgency needs to be determined

NOTES
          MT-NOTE: sge_do_sgeee_priority() is MT safe


20.10 sgeee_priority

NAME
          sgeee_priority() -- Compute final GE priority

SYNOPSIS
          static void sgeee_priority(lListElem *task, u_long32 jobid, double nsu,
          double min_tix, double max_tix)

FUNCTION
          The GE priority is computed for the task based on the already known
          ticket amount and already normalized urgency value. The ticket amount
          is normalized based on the ticket range passed. The weights for
          ticket and urgency value are applied.

INPUTS
          lListElem *task - The task whose priority is computed
          u_long32 jobid  - The job's id
          double nsu      - The normalized urgency value that applies to all
                            tasks of the job.
          double min_tix  - minimum ticket amount
          double max_tix  - maximum ticket amount

NOTES
          MT-NOTE: sgeee_priority() is MT safe
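
EXAMPLE
          A minimal sketch of the weighted combination described above. The
          weight parameters and the normalized POSIX priority term (npri, see
          recompute_prio above) are assumptions about how the pieces fit
          together, not the verified formula.

          /* tickets are normalized over the known ticket range, then combined
             with the already normalized urgency and priority values */
          static double geee_priority_sketch(double tickets, double min_tix,
                                             double max_tix, double nurg,
                                             double npri, double weight_ticket,
                                             double weight_urgency,
                                             double weight_priority)
          {
             double ntix = sge_normalize_value(tickets, min_tix, max_tix);

             return weight_urgency  * nurg +
                    weight_ticket   * ntix +
                    weight_priority * npri;
          }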


20.11 sgeee_resort_pending_jobs

NAME
          sgeee_resort_pending_jobs() -- Resort pending jobs after assignment

SYNOPSIS
          void sgeee_resort_pending_jobs(lList **job_list, lList *orderlist)

FUNCTION
          Update the pending jobs' order upon assignment and change ticket amounts
          in orders previously created.
          If we dispatch a job sub-task and the job has more sub-tasks, then
          the job is still first in the job list.
          We need to remove and reinsert the job into the sorted job
          list in case another job has a higher priority (i.e. has more tickets).
          Additionally it is necessary to update the number of pending tickets
          for the following pending array task. (The next task will get fewer
          tickets than the current one.)

INPUTS
          lList **job_list - The pending job list. The first job in the list was
                             assigned right before.
          lList *orderlist - The order list in which the ticket amounts of previously
                             created orders are updated.


20.12 sgeee_scheduler

NAME
          sgeee_scheduler() -- calc tickets, send orders, and sort job list

SYNOPSIS
          int sgeee_scheduler(sge_Sdescr_t *lists, lList *running_jobs, lList
          *finished_jobs, lList *pending_jobs, lList **orderlist)

FUNCTION
           - calculates the running and pending job tickets
           - sends the orders about the job tickets to the qmaster
           - orders the pending job list according to the job tickets
          
          On a "normal" scheduling interval:
                - calculate tickets for new and running jobs
                - don't decay and sum usage
                - don't update qmaster
          
          On a scheduling interval:
                - calculate tickets for new and running jobs
                - decay and sum usage
                - handle finished jobs
                - update qmaster

INPUTS
          sge_Sdescr_t *lists  - a ref to all lists in this scheduler
          lList *running_jobs  - a list of all running jobs
          lList *finished_jobs - a list of all finished jobs
          lList *pending_jobs  - a list of all pending jobs
          lList **orderlist    - the order list

RESULT
          int - 0 if everything went fine, -1 if not


20.13 tix_range_get

NAME
          tix_range_get() -- Get stored ticket range.

SYNOPSIS
          static void tix_range_get(double *min_tix, double *max_tix)

FUNCTION
          Get stored ticket range from global variables.

INPUTS
          double *min_tix - Target for minimum value.
          double *max_tix - Target for maximum value.

NOTES
          MT-NOTES: tix_range_get() is not MT safe


20.14 tix_range_set

NAME
          tix_range_set() -- Store ticket range.

SYNOPSIS
          static void tix_range_set(double min_tix, double max_tix)

FUNCTION
          Stores ticket range in the global variables.

INPUTS
          double min_tix - Minimum ticket value.
          double max_tix - Maximum ticket value.

NOTES
          MT-NOTES: tix_range_set() is not MT safe
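
EXAMPLE
          A minimal sketch of this pair of helpers; per the notes above they
          simply read and write process-global variables, which is why neither
          function is MT safe.

          /* module-global ticket range shared by set/get (sketch only) */
          static double stored_min_tix = 0.0;
          static double stored_max_tix = 0.0;

          static void tix_range_set_sketch(double min_tix, double max_tix)
          {
             stored_min_tix = min_tix;
             stored_max_tix = max_tix;
          }

          static void tix_range_get_sketch(double *min_tix, double *max_tix)
          {
             *min_tix = stored_min_tix;
             *max_tix = stored_max_tix;
          }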


21 valid_queue_user


21.1 sge_ar_queue_have_users_access

NAME
          sge_ar_queue_have_users_access() -- verify that all users of an AR have queue
                                             access

SYNOPSIS
          bool sge_ar_queue_have_users_access(lList **alpp, lListElem *ar, lListElem
          *queue, lList *master_userset_list)

FUNCTION
          Iterates over the AR_acl_list and verifies that every entry has queue access.
          If even one entry has no access, the function returns false.

INPUTS
          lList **alpp               - answer list
          lListElem *ar              - advance reservation object (AR_Type)
          lListElem *queue           - queue instance object (QU_Type)
          lList *master_userset_list - master userset list

RESULT
          bool - true if all entries have access
                 false if at least one entry has no access

NOTES
          MT-NOTE: sge_ar_queue_have_users_access() is MT safe
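
EXAMPLE
          A sketch of the iteration described above, assuming CULL's for_each
          and lGetList; the per-entry check (entry_has_queue_access) is a
          hypothetical helper, and the answer list handling is omitted.

          /* hypothetical helper: does one ACL entry have access to the queue? */
          extern bool entry_has_queue_access(lListElem *entry, lListElem *queue,
                                             lList *master_userset_list);

          static bool have_users_access_sketch(lListElem *ar, lListElem *queue,
                                               lList *master_userset_list)
          {
             lListElem *acl_entry = NULL;

             for_each(acl_entry, lGetList(ar, AR_acl_list)) {
                if (!entry_has_queue_access(acl_entry, queue,
                                            master_userset_list)) {
                   return false;   /* one entry without access is enough */
                }
             }
             return true;
          }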

