loop_over_grid.loop_over_grid_numba
===================================

.. py:function:: loop_over_grid.loop_over_grid_numba(row_amount, col_amount, grid, dt, met_data, t_steps_per_day, toggle_dict, parallel=False, use_mpi=False, ncores='all', dask_scheduler=None, client=None)

   This function wraps ``timestep_loop``, allowing it to be run in parallel over an arbitrarily sized grid. The grid is flattened, so for an NxN grid you don't need a multiple of N processors in order to use all the available cores. If ``model_setup.use_numba`` is True, then the pure-Python ``loop_over_grid`` is overloaded with this function.

   :Parameters: * **row_amount** -- Number of rows in the model grid.
                * **col_amount** -- Number of columns in the model grid.
                * **grid** (*numba.typed.List*) -- Nested list containing the instances of the IceShelf class for each x and y point. Vertical (z) information is stored within each class instance.
                * **met_data** (*numpy structured array*) -- Grid containing the met data associated with the model grid.
                * **t_steps_per_day** (*int*) -- Number of timesteps to run each day.
                * **toggle_dict** (*dict*) -- Dictionary of toggle switches fed into MONARCHS that determine certain aspects of the model (such as whether to run certain physical processes).
                * **parallel, use_mpi** -- Dummy arguments so that the regular ``loop_over_grid`` can be overloaded with this Numba implementation, since they are needed there.
                * **ncores** -- Number of cores to use. Default 'all', in which case it will use ``numba.config.NUMBA_DEFAULT_NUM_THREADS`` threads (i.e. all of the threads that Numba can detect on the system).
                * **dask_scheduler, client** -- Dummy arguments for compatibility.

   :returns: There is no need to reshape ``flat_grid`` back into ``np.shape(grid)``, as each element of ``flat_grid`` is a reference to the corresponding element of ``grid``, i.e. operating on ``flat_grid`` changes the corresponding element of ``grid``.
   :rtype: None. The function amends the IceShelf instances passed to it in place.
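
   A minimal sketch (using a hypothetical ``Cell`` class, not actual MONARCHS code) illustrating the reference semantics noted under ``:returns:``: flattening a nested Python list of objects copies references rather than the objects themselves, so updates made through the flat view are visible in the original grid and no reshape is needed.

   .. code-block:: python

      class Cell:
          """Stand-in for an IceShelf instance (hypothetical)."""
          def __init__(self):
              self.temperature = 0.0

      rows, cols = 2, 3
      grid = [[Cell() for _ in range(cols)] for _ in range(rows)]

      # Flatten row by row; each entry is a reference to the same Cell object
      # that sits in the nested grid.
      flat_grid = [cell for row in grid for cell in row]

      flat_grid[4].temperature = 273.15        # update via the flat view
      assert grid[1][1].temperature == 273.15  # change is visible in the 2D grid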