# fermipy package

## fermipy.config module

class fermipy.config.ConfigManager[source]

Bases: object

static create(configfile)[source]

Create a configuration dictionary from a yaml config file. This function will first populate the dictionary with defaults taken from pre-defined configuration files. The configuration dictionary is then updated with the user-defined configuration file. Any settings defined by the user will take precedence over the default settings.
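The precedence rule can be illustrated with a stand-alone sketch (`merge_config` here is an illustrative stand-in, not the fermipy internal):

```python
import copy

def merge_config(defaults, user):
    """Recursively update a defaults dictionary with user-defined values.

    Sketch of the precedence rule described above: user settings
    override defaults, and nested dictionaries are merged key by key.
    """
    out = copy.deepcopy(defaults)
    for key, val in user.items():
        if isinstance(val, dict) and isinstance(out.get(key), dict):
            out[key] = merge_config(out[key], val)
        else:
            out[key] = val
    return out

# User overrides one key; untouched defaults survive the merge.
defaults = {'binning': {'binsz': 0.1, 'roiwidth': 10.0}}
user = {'binning': {'roiwidth': 15.0}}
merged = merge_config(defaults, user)
```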

static load(path)[source]
class fermipy.config.Configurable(config, **kwargs)[source]

Bases: object

The base class provides common facilities like loading and saving configuration state.

config

Return the configuration dictionary of this class.

configdir
configure(config, **kwargs)[source]
classmethod get_config()[source]

Return a default configuration dictionary for this class.

print_config(logger, loglevel=None)[source]
write_config(outfile)[source]

Write the configuration dictionary to an output file.

fermipy.config.cast_config(config, defaults)[source]
fermipy.config.create_default_config(defaults)[source]

Create a configuration dictionary from a defaults dictionary. The defaults dictionary defines valid configuration keys with default values and docstrings. Each dictionary element should be a tuple or list containing (default value, docstring, type).
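A minimal illustration of that tuple format (the conversion function is an illustrative stand-in, not the fermipy implementation):

```python
def create_config(defaults):
    """Build a configuration dictionary from a defaults dictionary
    whose elements are (default value, docstring, type) tuples, as
    described above. Illustrative stand-in only."""
    config = {}
    for key, (value, docstring, typ) in defaults.items():
        config[key] = value
    return config

defaults = {
    'binsz': (0.1, 'Spatial bin size in degrees.', float),
    'roiwidth': (10.0, 'Width of the ROI in degrees.', float),
}
config = create_config(defaults)
```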

fermipy.config.validate_config(config, defaults, block=u'root')[source]

## fermipy.defaults module

fermipy.defaults.make_default_dict(d)[source]

## fermipy.gtanalysis module

class fermipy.gtanalysis.GTAnalysis(config, **kwargs)[source]

High-level analysis interface that manages a set of analysis component objects. Most of the functionality of the Fermipy package is provided through the methods of this class. The class constructor accepts a dictionary that defines the configuration for the analysis. Keyword arguments to the constructor can be used to override parameters in the configuration dictionary.
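As a sketch, a user configuration file for the constructor might look like the following (section and key names are taken from the `defaults` dictionary listed below; the file names and target are placeholders):

```yaml
data:
  evfile: ft1.lst          # placeholder event-file list
  scfile: ft2.fits         # placeholder spacecraft (FT2) file
binning:
  roiwidth: 10.0
  binsz: 0.1
  binsperdec: 8
selection:
  emin: 100
  emax: 100000
  zmax: 90
  target: mkn421           # placeholder; takes precedence over ra/dec
model:
  galdiff: gll_iem_v06.fits           # placeholder diffuse templates
  isodiff: iso_P8R2_SOURCE_V6_v06.txt
  catalogs: ['3FGL']
```

Passing the path of such a file to the constructor creates the analysis object, and keyword arguments (e.g. `logging={'verbosity': 4}`) override the corresponding values in the file.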

add_gauss_prior(name, parName, mean, sigma)[source]
add_source(name, src_dict, free=False, init_source=True, save_source_maps=True, **kwargs)[source]

Add a source to the ROI model. This function may be called either before or after setup.

Parameters: name (str) – Source name. src_dict (dict or Source object) – Dictionary or source object defining the source properties (coordinates, spectral parameters, etc.). free (bool) – Initialize the source with a free normalization parameter.
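A hypothetical `src_dict` might look like this (the key names follow the coordinate/spectral conventions referenced above, but the values and the source name are purely illustrative):

```python
# Illustrative source definition: a power-law point source at a
# hypothetical sky position.
src_dict = {
    'ra': 259.0, 'dec': 51.0,           # placeholder coordinates (deg)
    'SpatialModel': 'PointSource',
    'SpectrumType': 'PowerLaw',
    'Index': 2.0,
}
# gta.add_source('testsource', src_dict, free=True)  # gta: a configured GTAnalysis
```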
add_sources_from_roi(names, roi, free=False, **kwargs)[source]

Add multiple sources to the current ROI model copied from another ROI model.

Parameters: names (list) – List of str source names to add. roi (ROIModel object) – The ROI model from which to add sources. free (bool) – Initialize the sources with free normalization parameters.
bowtie(name, fd=None, energies=None)[source]

Generate a spectral uncertainty band (bowtie) for the given source. This will create an uncertainty band on the differential flux as a function of energy by propagating the errors on the global fit parameters. Note that this band only reflects the uncertainty for parameters that are currently free in the model.

Parameters: name (str) – Source name. fd (FluxDensity) – Flux density object. If this parameter is None then one will be created. energies (array-like) – Sequence of energies at which the flux band will be evaluated.
cleanup()[source]
components

Return the list of analysis components.

config

Return the configuration dictionary of this class.

configdir
configure(config, **kwargs)
constrain_norms(srcNames, cov_scale=1.0)[source]

Constrain the normalizations of one or more sources by adding gaussian priors with sigma equal to the parameter error times a scaling factor.

counts_map()[source]

Return a Map representation of the counts map.

Returns: map (Map) – The counts map.
static create(infile, config=None)[source]

Create a new instance of GTAnalysis from an analysis output file generated with write_roi. By default the new instance will inherit the configuration of the saved analysis instance. The configuration may be overridden by passing a configuration file path with the config argument.

Parameters: infile (str) – Path to the ROI results file. config (str) – Path to a configuration file. This will override the configuration in the ROI results file.
defaults = {u'sourcefind': {u'max_iter': (3, u'Set the number of search iterations.', <type 'int'>), u'min_separation': (1.0, u'Set the minimum separation in deg for sources added in each iteration.', <type 'float'>), u'tsmap_fitter': (u'tsmap', u'Set the method for generating the TS map.', <type 'str'>), u'sqrt_ts_threshold': (5.0, u'Set the threshold on sqrt(TS).', <type 'float'>), u'model': (None, u'Set the source model dictionary. By default the test source will be a PointSource with an Index 2 power-law spectrum.', <type 'dict'>), u'sources_per_iter': (3, u'', <type 'int'>)},
u'roiopt': {u'npred_frac': (0.95, u'', <type 'float'>), u'shape_ts_threshold': (25.0, u'Threshold on source TS used for determining the sources that will be fit in the third optimization step.', <type 'float'>), u'npred_threshold': (1.0, u'', <type 'float'>), u'max_free_sources': (5, u'Maximum number of sources that will be fit simultaneously in the first optimization step.', <type 'int'>)},
u'selection': {u'radius': (None, u'Radius of data selection. If none this will be automatically set from the ROI size.', <type 'float'>), u'tmin': (None, u'Minimum time (MET).', <type 'int'>), u'target': (None, u'Choose an object on which to center the ROI. This option takes precedence over ra/dec or glon/glat.', <type 'str'>), u'glon': (None, u'', <type 'float'>), u'emin': (None, u'Minimum Energy (MeV)', <type 'float'>), u'emax': (None, u'Maximum Energy (MeV)', <type 'float'>), u'tmax': (None, u'Maximum time (MET).', <type 'int'>), u'glat': (None, u'', <type 'float'>), u'filter': (None, u'Filter string for gtmktime selection.', <type 'str'>), u'logemax': (None, u'Maximum Energy (log10(MeV))', <type 'float'>), u'ra': (None, u'', <type 'float'>), u'evtype': (None, u'Event type selection.', <type 'int'>), u'evclass': (None, u'Event class selection.', <type 'int'>), u'zmax': (None, u'Maximum zenith angle.', <type 'float'>), u'logemin': (None, u'Minimum Energy (log10(MeV))', <type 'float'>), u'dec': (None, u'', <type 'float'>), u'roicut': (u'no', u'', <type 'str'>), u'convtype': (None, u'Conversion type selection.', <type 'int'>)},
u'logging': {u'verbosity': (3, u'', <type 'int'>), u'chatter': (3, u'Set the chatter parameter of the STs.', <type 'int'>)},
u'tsmap': {u'multithread': (False, u'', <type 'bool'>), u'model': (None, u'Dictionary defining the properties of the test source.', <type 'dict'>), u'erange': (None, u'Lower and upper energy bounds in log10(E/MeV). By default the calculation will be performed over the full analysis energy range.', <type 'list'>), u'max_kernel_radius': (3.0, u'', <type 'float'>)},
u'mc': {u'seed': (None, u'', <type 'int'>)},
u'components': (None, u'', <type 'list'>),
u'localize': {u'dtheta_max': (0.3, u'Half-width of the search region in degrees used for the first pass of the localization search.', <type 'float'>), u'nstep': (5, u'Number of steps along each spatial dimension in the refined likelihood scan.', <type 'int'>), u'fix_background': (True, u'Fix background parameters when fitting the source flux in each energy bin.', <type 'bool'>), u'update': (False, u'Update the source model with the best-fit position.', <type 'bool'>)},
u'binning': {u'projtype': (u'WCS', u'Projection mode (WCS or HPX).', <type 'str'>), u'binsperdec': (8, u'Number of energy bins per decade.', <type 'float'>), u'enumbins': (None, u'Number of energy bins. If none this will be inferred from energy range and binsperdec parameter.', <type 'int'>), u'roiwidth': (10.0, u'Width of the ROI in degrees. The number of pixels in each spatial dimension will be set from roiwidth / binsz (rounded up).', <type 'float'>), u'hpx_ebin': (True, u'Include energy binning', <type 'bool'>), u'binsz': (0.1, u'Spatial bin size in degrees.', <type 'float'>), u'npix': (None, u'Number of pixels. If none then this will be set from roiwidth and binsz.', <type 'int'>), u'hpx_order': (10, u'Order of the map (int between 0 and 12, included)', <type 'int'>), u'proj': (u'AIT', u'Spatial projection for WCS mode.', <type 'str'>), u'coordsys': (u'CEL', u'Coordinate system of the spatial projection (CEL or GAL).', <type 'str'>), u'hpx_ordering_scheme': (u'RING', u'HEALPix Ordering Scheme', <type 'str'>)},
u'extension': {u'save_model_map': (False, u'', <type 'bool'>), u'width': (None, u'Parameter vector for scan over spatial extent. If none then the parameter vector will be set from width_min, width_max, and width_nstep.', <type 'str'>), u'fix_background': (False, u'Fix any background parameters that are currently free in the model when performing the likelihood scan over extension.', <type 'bool'>), u'save_templates': (False, u'', <type 'bool'>), u'width_max': (1.0, u'Maximum value in degrees for the likelihood scan over spatial extent.', <type 'float'>), u'width_min': (0.01, u'Minimum value in degrees for the likelihood scan over spatial extent.', <type 'float'>), u'spatial_model': (u'GaussianSource', u'Spatial model used for the extension test.', <type 'str'>), u'update': (False, u'Update the source model with the best-fit spatial extension.', <type 'bool'>), u'width_nstep': (21, u'Number of steps for the spatial likelihood scan.', <type 'int'>)},
u'sed': {u'use_local_index': (False, u'Use a power-law approximation to the shape of the global spectrum in each bin. If this is false then a constant index set to bin_index will be used.', <type 'bool'>), u'bin_index': (2.0, u'Spectral index that will be used when fitting the energy distribution within an energy bin.', <type 'float'>), u'cov_scale': (3.0, u'', <type 'float'>), u'fix_background': (True, u'Fix background parameters when fitting the source flux in each energy bin.', <type 'bool'>), u'ul_confidence': (0.95, u'Confidence level for upper limit calculation.', <type 'float'>)},
u'fileio': {u'workdir': (None, u'Override the working directory.', <type 'str'>), u'savefits': (True, u'Save intermediate FITS files.', <type 'bool'>), u'scratchdir': (u'/scratch', u'Path to the scratch directory.', <type 'str'>), u'outdir': (None, u'Path of the output directory. If none this will default to the directory containing the configuration file.', <type 'str'>), u'logfile': (None, u'Path to log file. If None then log will be written to fermipy.log.', <type 'str'>), u'usescratch': (False, u'Run analysis in a temporary directory under scratchdir.', <type 'bool'>)},
u'gtlike': {u'irfs': (None, u'Set the IRF string.', <type 'str'>), u'minbinsz': (0.05, u'Set the minimum bin size used for resampling diffuse maps.', <type 'float'>), u'bexpmap': (None, u'', <type 'str'>), u'edisp': (True, u'Enable the correction for energy dispersion.', <type 'bool'>), u'srcmap': (None, u'', <type 'str'>), u'resample': (True, u'', <type 'bool'>), u'llscan_npts': (20, u'Number of evaluation points to use when performing a likelihood scan.', <type 'int'>), u'convolve': (True, u'', <type 'bool'>), u'rfactor': (2, u'', <type 'int'>), u'edisp_disable': (None, u'Provide a list of sources for which the edisp correction should be disabled.', <type 'list'>)},
u'residmap': {u'model': (None, u'Dictionary defining the properties of the test source. By default the test source will be a PointSource with an Index 2 power-law spectrum.', <type 'dict'>), u'erange': (None, u'Lower and upper energy bounds in log10(E/MeV). By default the calculation will be performed over the full analysis energy range.', <type 'list'>)},
u'optimizer': {u'retries': (3, u'Set the number of times to retry the fit when the fit quality is less than min_fit_quality.', <type 'int'>), u'verbosity': (0, u'', <type 'int'>), u'optimizer': (u'MINUIT', u'Set the optimization algorithm to use when maximizing the likelihood function.', <type 'str'>), u'min_fit_quality': (3, u'Set the minimum fit quality.', <type 'int'>), u'tol': (0.0001, u'Set the optimizer tolerance.', <type 'float'>)},
u'model': {u'catalogs': (None, u'', <type 'list'>), u'limbdiff': (None, u'', <type 'list'>), u'src_radius_roi': (None, u'Half-width of src_roiwidth selection. This parameter can be used in lieu of src_roiwidth.', <type 'float'>), u'extdir': (None, u'Set a directory that will be searched for extended source FITS templates. Template files in this directory will take precedence over catalog source templates with the same name.', <type 'str'>), u'sources': (None, u'', <type 'list'>), u'assoc_xmatch_columns': ([u'3FGL_Name'], u'Choose a set of association columns on which to cross-match catalogs.', <type 'list'>), u'diffuse': (None, u'', <type 'list'>), u'src_roiwidth': (None, u'Width of square selection cut for inclusion of catalog sources in the model. Includes sources within a square region with side src_roiwidth centered on the ROI. If this parameter is none then no selection is applied. This selection will be ORed with the src_radius selection.', <type 'float'>), u'isodiff': (None, u'Set the isotropic template.', <type 'list'>), u'merge_sources': (True, u'Merge properties of sources that appear in multiple source catalogs. If merge_sources=false then subsequent sources with the same name will be ignored.', <type 'bool'>), u'extract_diffuse': (False, u'Extract a copy of all mapcube components centered on the ROI.', <type 'bool'>), u'src_radius': (None, u'Radius of circular selection cut for inclusion of catalog sources in the model. Includes sources within a circle of this radius centered on the ROI. If this parameter is none then no selection is applied. This selection will be ORed with the src_roiwidth selection.', <type 'float'>), u'galdiff': (None, u'Set the galactic IEM mapcube.', <type 'list'>)},
u'data': {u'evfile': (None, u'Path to FT1 file or list of FT1 files.', <type 'str'>), u'cacheft1': (True, u'Cache FT1 files when performing binned analysis. If false then only the counts cube is retained.', <type 'bool'>), u'scfile': (None, u'Path to FT2 (spacecraft) file.', <type 'str'>), u'ltcube': (None, u'Path to livetime cube. If none a livetime cube will be generated with gtmktime.', <type 'str'>)},
u'plotting': {u'catalogs': (None, u'', <type 'list'>), u'format': (u'png', u'', <type 'str'>), u'erange': (None, u'', <type 'list'>), u'graticule_radii': (None, u'Define a list of radii at which circular graticules will be drawn.', <type 'list'>), u'cmap': (u'ds9_b', u'Set the colormap for 2D plots.', <type 'str'>), u'label_ts_threshold': (0.0, u'TS threshold for labeling sources in sky maps. If None then no sources will be labeled.', <type 'float'>)},
u'tscube': {u'do_sed': (True, u'Compute the energy bin-by-bin fits', <type 'bool'>), u'remake_test_source': (False, u'If true, recomputes the test source image (otherwise just shifts it)', <type 'bool'>), u'st_scan_level': (0, u'Level to which to do ST-based fitting (for testing)', <type 'int'>), u'cov_scale': (-1.0, u'Scale factor to apply to broadband fitting cov. matrix in bin-by-bin fits ( < 0 -> fixed )', <type 'float'>), u'max_iter': (30, u'Maximum number of iterations for the Newton method fitter.', <type 'int'>), u'nnorm': (10, u'Number of points in the likelihood v. normalization scan', <type 'int'>), u'norm_sigma': (5.0, u'Number of sigma to use for the scan range', <type 'float'>), u'tol_type': (0, u'Absolute (0) or relative (1) criteria for convergence.', <type 'int'>), u'cov_scale_bb': (-1.0, u'Scale factor to apply to global fitting cov. matrix in broadband fits. ( < 0 -> no prior )', <type 'float'>), u'tol': (0.001, u'Criteria for fit convergence (estimated vertical distance to min < tol )', <type 'float'>), u'model': (None, u'Dictionary defining the properties of the test source. By default the test source will be a PointSource with an Index 2 power-law spectrum.', <type 'dict'>)}}
delete_source(name, save_template=True, delete_source_map=False, build_fixed_wts=True, **kwargs)[source]

Delete a source from the ROI model.

Parameters: name (str) – Source name. save_template (bool) – Keep the SpatialMap FITS template associated with this source. delete_source_map (bool) – Delete the source map associated with this source from the source maps file.

Returns: src (Model) – The deleted source object.
delete_sources(cuts=None, distance=None, minmax_ts=None, minmax_npred=None, square=False, exclude_diffuse=True)[source]

Delete sources in the ROI model satisfying the given selection criteria.

Returns: srcs (list) – A list of Model objects.
energies

Return the energy bin edges.

enumbins

Return the number of energy bins.

erange
extension(name, **kwargs)[source]

Test this source for spatial extension with the likelihood ratio method (TS_ext). This method will substitute an extended spatial model for the given source and perform a one-dimensional scan of the spatial extension parameter over the range specified with the width parameters. The 1-D profile likelihood is then used to compute the best-fit value, upper limit, and TS for extension. Any background parameters that are free will also be simultaneously profiled in the likelihood scan.

Parameters: name (str) – Source name. spatial_model (str) – Spatial model that will be used when testing extension (e.g. DiskSource, GaussianSource). width_min (float) – Minimum value in degrees for the spatial extension scan. width_max (float) – Maximum value in degrees for the spatial extension scan. width_nstep (int) – Number of scan points between width_min and width_max. Scan points will be spaced evenly on a logarithmic scale between log(width_min) and log(width_max). width (array-like) – Sequence of values in degrees for the spatial extension scan. If this argument is None then the scan points will be determined from width_min/width_max/width_nstep. fix_background (bool) – Fix all background sources when performing the extension fit. update (bool) – Update this source with the best-fit model for spatial extension. save_model_map (bool) – Save model maps for all steps in the likelihood scan.

Returns: extension (dict) – Dictionary containing the results of the extension analysis. The same dictionary is also saved to the dictionary of this source under 'extension'.
find_sources(prefix=u'', **kwargs)

An iterative source-finding algorithm.

Parameters: model (dict) – Dictionary defining the properties of the test source. This is the model that will be used for generating TS maps. sqrt_ts_threshold (float) – Source threshold in sqrt(TS). Only peaks with sqrt(TS) exceeding this threshold will be used as seeds for new sources. min_separation (float) – Minimum separation in degrees of sources detected in each iteration. The source finder will look for the maximum peak in the TS map within a circular region of this radius. max_iter (int) – Maximum number of source finding iterations. The source finder will continue adding sources until no additional peaks are found or the number of iterations exceeds this number. sources_per_iter (int) – Maximum number of sources that will be added in each iteration. If the number of detected peaks in a given iteration is larger than this number, only the N peaks with the largest TS will be used as seeds for the current iteration. tsmap_fitter (str) – Set the method used internally for generating TS maps. Valid options: tsmap, tscube. tsmap (dict) – Keyword arguments dictionary for the tsmap method. tscube (dict) – Keyword arguments dictionary for the tscube method.

Returns: peaks (list) – List of peak objects. sources (list) – List of source objects.
fit(update=True, **kwargs)[source]

Run the likelihood optimization. This will execute a fit of all parameters that are currently free in the model and update the characteristics of the corresponding model components (TS, npred, etc.). The fit will be repeated up to N times (set with the retries parameter) until a fit quality greater than or equal to min_fit_quality is obtained. If the requested fit quality is not obtained then all parameter values will be reverted to their state prior to the execution of the fit.

Parameters: update (bool) – Update the ROI model with the results of the fit. tol (float) – Set the optimizer tolerance. verbosity (int) – Set the optimizer output level. optimizer (str) – Set the likelihood optimizer (e.g. MINUIT or NEWMINUIT). retries (int) – Set the number of times to rerun the fit when the fit quality is less than min_fit_quality. min_fit_quality (int) – Set the minimum fit quality. If the fit quality is smaller than this value then all model parameters will be restored to their values prior to the fit. reoptimize (bool) – Refit background sources when updating source properties (TS and likelihood profiles).

Returns: fit (dict) – Dictionary containing diagnostic information from the fit (fit quality, parameter covariances, etc.).
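The retry behavior described above amounts to a loop of the following shape (a toy stand-in; `run_fit` and the quality codes are placeholders, not fermipy internals):

```python
def fit_with_retries(run_fit, retries=3, min_fit_quality=3):
    """Repeat a fit until its quality reaches min_fit_quality or the
    retry budget is exhausted. run_fit() returns a fit-quality code."""
    quality = run_fit()
    for _ in range(retries):
        if quality >= min_fit_quality:
            break
        quality = run_fit()
    return quality

# A hypothetical fitter whose quality improves on each call:
results = iter([1, 2, 3])
final_quality = fit_with_retries(lambda: next(results))
```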
fit_correlation()[source]
free_index(name, free=True)[source]

Free/Fix index of a source.

Parameters: name (str) – Source name. free (bool) – Choose whether to free (free=True) or fix (free=False).
free_norm(name, free=True)[source]

Free/Fix normalization of a source.

Parameters: name (str) – Source name. free (bool) – Choose whether to free (free=True) or fix (free=False).
free_parameter(name, par, free=True)[source]
free_shape(name, free=True)[source]

Free/Fix shape parameters of a source.

Parameters: name (str) – Source name. free (bool) – Choose whether to free (free=True) or fix (free=False).
free_source(name, free=True, pars=None)[source]

Free/Fix parameters of a source.

Parameters: name (str) – Source name. free (bool) – Choose whether to free (free=True) or fix (free=False) source parameters. pars (list) – Set a list of parameters to be freed/fixed for this source. If none then all source parameters will be freed/fixed with the exception of those defined in the skip_pars list.
free_sources(free=True, pars=None, cuts=None, distance=None, minmax_ts=None, minmax_npred=None, square=False, exclude_diffuse=False)[source]

Free or fix sources in the ROI model satisfying the given selection. When multiple selections are defined, the selected sources will be those satisfying the logical AND of all selections (e.g. distance < X && minmax_ts[0] < ts < minmax_ts[1] && ...).

Parameters: free (bool) – Choose whether to free (free=True) or fix (free=False) source parameters. pars (list) – Set a list of parameters to be freed/fixed for this source. If none then all source parameters will be freed/fixed. If pars='norm' then only normalization parameters will be freed. cuts (dict) – Dictionary of [min,max] selections on source properties. distance (float) – Distance out to which sources should be freed or fixed. If this parameter is none no selection will be applied. minmax_ts (list) – Free sources that have TS in the range [min,max]. If either min or max are None then only a lower (upper) bound will be applied. If this parameter is none no selection will be applied. minmax_npred (list) – Free sources that have npred in the range [min,max]. If either min or max are None then only a lower (upper) bound will be applied. If this parameter is none no selection will be applied. square (bool) – Switch between applying a circular or square (ROI-like) selection on the maximum projected distance from the ROI center. exclude_diffuse (bool) – Exclude diffuse sources.

Returns: srcs (list) – A list of Model objects.
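The AND-of-selections rule can be illustrated with a small stand-in (the source dictionaries and their key names are hypothetical, not fermipy's Model objects):

```python
def select_sources(sources, distance=None, minmax_ts=None, minmax_npred=None):
    """Return the sources passing the logical AND of all defined
    selections, mirroring the rule described above. Each source is a
    dict with illustrative 'offset', 'ts', and 'npred' keys."""
    def in_range(val, minmax):
        lo, hi = minmax
        return (lo is None or val >= lo) and (hi is None or val <= hi)

    out = []
    for src in sources:
        if distance is not None and src['offset'] > distance:
            continue  # fails the distance selection
        if minmax_ts is not None and not in_range(src['ts'], minmax_ts):
            continue  # fails the TS range selection
        if minmax_npred is not None and not in_range(src['npred'], minmax_npred):
            continue  # fails the npred range selection
        out.append(src)
    return out

srcs = [{'offset': 1.0, 'ts': 30.0, 'npred': 200.0},
        {'offset': 4.0, 'ts': 9.0, 'npred': 50.0}]
# Only the first source is both within 3 deg and above TS=25.
sel = select_sources(srcs, distance=3.0, minmax_ts=[25.0, None])
```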
free_sources_by_position(free=True, pars=None, distance=None, square=False)[source]

Free/Fix all sources within a certain distance of the given sky coordinate. By default it will use the ROI center.

Parameters: free (bool) – Choose whether to free (free=True) or fix (free=False) source parameters. pars (list) – Set a list of parameters to be freed/fixed for this source. If none then all source parameters will be freed/fixed. If pars='norm' then only normalization parameters will be freed. distance (float) – Distance in degrees out to which sources should be freed or fixed. If none then all sources will be selected. square (bool) – Apply a square (ROI-like) selection on the maximum distance in either X or Y in projected cartesian coordinates.

Returns: srcs (list) – A list of Source objects.
generate_model(model_name=None)[source]

Generate model maps for all components. model_name should be a unique identifier for the model. If model_name is None then the model maps will be generated using the current parameters of the ROI.

get_config()

Return a default configuration dictionary for this class.

get_free_param_vector()[source]
get_free_source_params(name)[source]
get_params(freeonly=False)[source]
get_source_dfde(name)[source]

Return differential flux distribution of a source. For sources with FileFunction spectral type this returns the internal differential flux array.

Returns: loge (ndarray) – Array of energies at which the differential flux is evaluated (log10(E/MeV)). dfde (ndarray) – Array of differential flux values (cm^{-2} s^{-1} MeV^{-1}) evaluated at energies in loge.
get_source_name(name)[source]

Return the name of a source as it is defined in the pyLikelihood model object.

get_sources(cuts=None, distance=None, minmax_ts=None, minmax_npred=None, square=False)[source]

Retrieve list of sources in the ROI satisfying the given selections.

Returns: srcs (list) – A list of Model objects.
get_src_model(name, paramsonly=False, reoptimize=False, npts=None)[source]

Compose a dictionary for a source with the current best-fit parameters.

Parameters: name (str) – Source name. paramsonly (bool) – reoptimize (bool) – Re-fit background parameters in likelihood scan. npts (int) – Number of points for likelihood scan.
like

Return the global likelihood object.

load_roi(infile, reload_sources=False)[source]

This function reloads the analysis state from a previously saved instance generated with write_roi.

Parameters: infile (str) – Path to the ROI results file. reload_sources (bool) – Regenerate source maps for non-diffuse sources.
load_xml(xmlfile)[source]

Parameters: xmlfile (str) – Name of the input XML file.
localize(name, **kwargs)

Find the best-fit position of a source. Localization is performed in two steps. First a TS map is computed centered on the source with half-width set by dtheta_max. A fit is then performed to the maximum TS peak in this map. The source position is then further refined by scanning the likelihood in the vicinity of the peak found in the first step. The size of the scan region is set to encompass the 99% positional uncertainty contour as determined from the peak fit.

Parameters: name (str) – Source name. dtheta_max (float) – Maximum offset in RA/DEC in deg from the nominal source position that will be used to define the boundaries of the TS map search region. nstep (int) – Number of steps in longitude/latitude that will be taken when refining the source position. The bounds of the scan range are set to the 99% positional uncertainty as determined from the TS map peak fit. The total number of sampling points will be nstep**2. fix_background (bool) – Fix background parameters when fitting the source position. update (bool) – Update the model for this source with the best-fit position. If newname=None this will overwrite the existing source map of this source with one corresponding to its new location. newname (str) – Name that will be assigned to the relocalized source when update=True. If newname is None then the existing source name will be used.

Returns: localize (dict) – Dictionary containing results of the localization analysis. This dictionary is also saved to the dictionary of this source under 'localize'.
make_plots(prefix, mcube_map=None, **kwargs)[source]

Make diagnostic plots using the current ROI model.

model_counts_map(name=None, exclude=None)[source]

Return the model counts map for a single source, a list of sources, or for the sum of all sources in the ROI. The exclude parameter can be used to exclude one or more components when generating the model map.

Parameters: name (str or list of str) – Parameter controlling the set of sources for which the model counts map will be calculated. If name=None the model map will be generated for all sources in the ROI. exclude (str or list of str) – List of sources that will be excluded when calculating the model map.

Returns: map (Map) – The model counts map.
model_counts_spectrum(name, emin=None, emax=None, summed=False)[source]

Return the predicted number of model counts versus energy for a given source and energy range. If summed=True return the counts spectrum summed over all components otherwise return a list of model spectra.

npix

Return the number of pixels in each spatial dimension.

optimize(**kwargs)[source]

Iteratively optimize the ROI model. The optimization is performed in three sequential steps:

• Free the normalization of the N largest components (as determined from NPred) that contain a fraction npred_frac of the total predicted counts in the model and perform a simultaneous fit of the normalization parameters of these components.
• Individually fit the normalizations of all sources that were not included in the first step in order of their npred values. Skip any sources that have NPred < npred_threshold.
• Individually fit the shape and normalization parameters of all sources with TS > shape_ts_threshold where TS is determined from the first two steps of the ROI optimization.

To ensure that the model is fully optimized this method can be run multiple times.

Parameters: npred_frac (float) – Threshold on the fractional number of counts in the N largest components in the ROI. This parameter determines the set of sources that are fit in the first optimization step. npred_threshold (float) – Threshold on the minimum number of counts of individual sources. This parameter determines the sources that are fit in the second optimization step. shape_ts_threshold (float) – Threshold on source TS used for determining the sources that will be fit in the third optimization step. max_free_sources (int) – Maximum number of sources that will be fit simultaneously in the first optimization step.
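The source selection made in the first optimization step can be sketched as follows (a toy stand-in; the source names and npred values are illustrative):

```python
def leading_components(npreds, npred_frac=0.95, max_free_sources=5):
    """Pick the N largest components (by predicted counts) that together
    contain a fraction npred_frac of the total predicted counts, capped
    at max_free_sources. Toy stand-in for the first optimization step."""
    total = sum(npreds.values())
    selected, running = [], 0.0
    for name, npred in sorted(npreds.items(), key=lambda kv: -kv[1]):
        if running >= npred_frac * total or len(selected) >= max_free_sources:
            break
        selected.append(name)
        running += npred
    return selected

npreds = {'srcA': 800.0, 'srcB': 150.0, 'srcC': 40.0, 'srcD': 10.0}
# srcA + srcB already contain 95% of the 1000 predicted counts.
top = leading_components(npreds)
```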
outdir

Return the analysis output directory.

print_config(logger, loglevel=None)
print_model()[source]
print_params(allpars=False)[source]

Print information about the model parameters (values, errors, bounds, scale).

print_roi()[source]
profile(name, parName, emin=None, emax=None, reoptimize=False, xvals=None, npts=None, savestate=True)[source]

Profile the likelihood for the given source and parameter.

Parameters: name (str) – Source name. parName (str) – Parameter name. reoptimize (bool) – Re-fit nuisance parameters at each step in the scan. Note that this will only re-fit parameters that were free when the method was executed.

Returns: lnlprofile (dict) – Dictionary containing the results of the likelihood scan.
profile_norm(name, emin=None, emax=None, reoptimize=False, xvals=None, npts=20, fix_shape=True, savestate=True)[source]

Profile the normalization of a source.

Parameters: name (str) – Source name. reoptimize (bool) – Re-optimize free parameters in the model at each point in the profile likelihood scan.
projtype

Return the type of projection to use (WCS or HPX).

reload_source(name)[source]

Delete and reload a source in the model. This will refresh the spatial model of this source to the one defined in the XML model.

remove_prior(name, parName)[source]
remove_priors()[source]

Clear all priors.

residmap(prefix=u'', **kwargs)

Generate 2-D spatial residual maps using the current ROI model and the convolution kernel defined with the model argument.

Parameters: prefix (str) – String that will be prefixed to the output residual map files. model (dict) – Dictionary defining the properties of the convolution kernel. exclude (str or list of str) – Source or sources that will be removed from the model when computing the residual map. erange (list) – Restrict the analysis to an energy range (emin,emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax are None then only an upper/lower bound on the energy range will be applied. make_plots (bool) – Write image files. write_fits (bool) – Write FITS files.

Returns: maps (dict) – A dictionary containing the Map objects for the residual significance and amplitude.
roi

Return the ROI object.

scale_parameter(name, par, scale)[source]
sed(name, profile=True, energies=None, **kwargs)

Generate a spectral energy distribution (SED) for a source. This function will fit the normalization of the source in each energy bin. By default the SED will be generated with the analysis energy bins but a custom binning can be defined with the energies parameter.

Parameters: name (str) – Source name. prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). profile (bool) – Profile the likelihood in each energy bin. energies (ndarray) – Sequence of energies in log10(E/MeV) defining the edges of the energy bins. If this argument is None then the analysis energy bins will be used. The energies in this sequence must align with the bin edges of the underlying analysis instance. bin_index (float) – Spectral index that will be used when fitting the energy distribution within an energy bin. use_local_index (bool) – Use a power-law approximation to the shape of the global spectrum in each bin. If this is false then a constant index set to bin_index will be used. fix_background (bool) – Fix background components when fitting the flux normalization in each energy bin. If fix_background=False then all background parameters that are currently free in the fit will be profiled. By default fix_background=True. ul_confidence (float) – Set the confidence level that will be used for the calculation of flux upper limits in each energy bin. cov_scale (float) – Scaling factor that will be applied when setting the gaussian prior on the normalization of free background sources. If this parameter is None then no gaussian prior will be applied. write_fits (bool) – write_npy (bool) –

Returns: sed (dict) – Dictionary containing the output of the SED analysis. This dictionary is also saved to the ‘sed’ dictionary of the Source instance.
set_edisp_flag(name, flag=True)[source]

Enable or disable the energy dispersion correction for the given source.

set_energy_range(emin, emax)[source]

Set the energy bounds of the analysis. This restricts the evaluation of the likelihood to the data that falls in this range. Input values will be rounded to the closest bin edge value. If either argument is None then the lower or upper bound of the analysis instance will be used.

Parameters: emin (float) – Lower energy bound in log10(E/MeV). emax (float) – Upper energy bound in log10(E/MeV).

Returns: eminmax (array) – Minimum and maximum energy.
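
The edge-rounding behavior can be sketched in a few lines. This is only an illustration: `snap_to_edges` is a hypothetical helper, not part of fermipy.

```python
import numpy as np

def snap_to_edges(emin, emax, edges):
    # Round each bound in log10(E/MeV) to the closest analysis bin edge.
    # A bound of None falls back to the corresponding end of the full range.
    emin = edges[0] if emin is None else edges[np.argmin(np.abs(edges - emin))]
    emax = edges[-1] if emax is None else edges[np.argmin(np.abs(edges - emax))]
    return float(emin), float(emax)

edges = np.linspace(2.0, 5.0, 13)        # 0.25-dex bins, 100 MeV to 100 GeV
print(snap_to_edges(2.6, None, edges))   # -> (2.5, 5.0)
```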
set_free_param_vector(free)[source]
set_log_level(level)[source]
set_norm(name, value, update_source=True)[source]
set_norm_scale(name, value)[source]
set_parameter(name, par, value, true_value=True, scale=None, bounds=None, update_source=True)[source]

Update the value of a parameter. Parameter bounds will automatically be adjusted to encompass the new parameter value.

Parameters: name (str) – Source name. par (str) – Parameter name. value (float) – Parameter value. By default this argument should be the unscaled (True) parameter value. scale (float) – Parameter scale (optional). Value argument is interpreted with respect to the scale parameter if it is provided. update_source (bool) – Update the source dictionary for the object.
set_parameter_bounds(name, par, bounds)[source]

Set the bounds of a parameter.

Parameters: name (str) – Source name. par (str) – Parameter name. bounds (list) – Upper and lower bound.
set_parameter_scale(name, par, scale)[source]

Update the scale of a parameter while keeping its value constant.

set_source_dfde(name, dfde, update_source=True)[source]

Set the differential flux distribution of a source with the FileFunction spectral type.

Parameters: name (str) – Source name. dfde (ndarray) – Array of differential flux values (cm^{-2} s^{-1} MeV^{-1}).
set_source_spectrum(name, spectrum_type=u'PowerLaw', spectrum_pars=None, update_source=True)[source]

Set the spectral model of a source. This function can be used to change the spectral type of a source or modify its spectral parameters. If called with spectrum_type=’FileFunction’ and spectrum_pars=None, the source spectrum will be replaced with a FileFunction with the same differential flux distribution as the original spectrum.

Parameters: name (str) – Source name. spectrum_type (str) – Spectrum type (PowerLaw, etc.). spectrum_pars (dict) – Dictionary of spectral parameters (optional). update_source (bool) – Recompute all source characteristics (flux, TS, NPred) using the new spectral model of the source.
setup(init_sources=True, overwrite=False)[source]

Run pre-processing for each analysis component and construct a joint likelihood object. This function performs the following tasks: data selection (gtselect, gtmktime), data binning (gtbin), and model generation (gtexpcube2, gtsrcmaps).

Parameters: init_sources (bool) – Choose whether to compute properties (flux, TS, etc.) for individual sources. overwrite (bool) – Run all pre-processing steps even if the output file of that step is present in the working directory. By default this function will skip any steps for which the output file already exists.
simulate_roi(name=None, randomize=True, restore=False)[source]

Generate a simulation of the ROI using the current best-fit model and replace the data counts cube with this simulation. The simulation is created by generating an array of Poisson random numbers with expectation values drawn from the model cube of the binned analysis instance. This function will update the counts cube both in memory and in the source map file. The counts cube can be restored to its original state by calling this method with restore = True.

Parameters: name (str) – Name of the model component to be simulated. If None then the whole ROI will be simulated. restore (bool) – Restore the data counts cube to its original state.
simulate_source(src_dict=None)[source]

Inject simulated source counts into the data.

Parameters: src_dict (dict) – Dictionary defining the spatial and spectral properties of the source that will be injected.
stage_input()[source]

Copy data products to intermediate working directory.

stage_output()[source]

Copy data products to final output directory.

tscube(prefix=u'', **kwargs)

Generate a spatial TS map for a source component with properties defined by the model argument. This method uses the gttscube ST application for source fitting and will simultaneously fit the test source normalization as well as the normalizations of any background components that are currently free. The output of this method is a dictionary containing Map objects with the TS and amplitude of the best-fit test source. By default this method will also save maps to FITS files and render them as image files.

Parameters: prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). model (dict) – Dictionary defining the properties of the test source. do_sed (bool) – Compute the energy bin-by-bin fits. nnorm (int) – Number of points in the likelihood vs. normalization scan. norm_sigma (float) – Number of sigma to use for the scan range. tol (float) – Criteria for fit convergence (estimated vertical distance to min < tol). tol_type (int) – Absolute (0) or relative (1) criteria for convergence. max_iter (int) – Maximum number of iterations for the Newton’s method fitter. remake_test_source (bool) – If true, recomputes the test source image (otherwise just shifts it). st_scan_level (int) – make_plots (bool) – Write image files. write_fits (bool) – Write a FITS file with the results of the analysis.

Returns: maps (dict) – A dictionary containing the Map objects for TS and source amplitude.
tsmap(prefix=u'', **kwargs)

Generate a spatial TS map for a source component with properties defined by the model argument. The TS map will have the same geometry as the ROI. The output of this method is a dictionary containing Map objects with the TS and amplitude of the best-fit test source. By default this method will also save maps to FITS files and render them as image files.

This method uses a simplified likelihood fitting implementation that only fits for the normalization of the test source. Before running this method it is recommended to first optimize the ROI model (e.g. by running optimize()).

Parameters: prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). model (dict) – Dictionary defining the properties of the test source. exclude (str or list of str) – Source or sources that will be removed from the model when computing the TS map. erange (list) – Restrict the analysis to an energy range (emin,emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax is None then only an upper/lower bound on the energy range will be applied. max_kernel_radius (float) – Set the maximum radius of the test source kernel. Using a smaller value will speed up the TS calculation at the loss of accuracy. The default value is 3 degrees. make_plots (bool) – Write image files. write_fits (bool) – Write a FITS file. write_npy (bool) – Write a numpy file.

Returns: maps (dict) – A dictionary containing the Map objects for TS and source amplitude.
unzero_source(name)[source]
update_source(name, paramsonly=False, reoptimize=False)[source]

Update the dictionary for this source.

Parameters: name (str) – paramsonly (bool) – reoptimize (bool) – Re-fit background parameters in likelihood scan.
workdir

Return the analysis working directory.

write_config(outfile)

Write the configuration dictionary to an output file.

write_model_map(model_name, name=None)[source]

Save the counts model map to a FITS file.

Parameters: model_name (str) – String that will be appended to the name of the output file. name (str) – Name of the component.
write_roi(outfile=None, make_residuals=False, save_model_map=True, format=None, **kwargs)[source]

Write current model to a file. This function will write an XML model file and an ROI dictionary in both YAML and npy formats.

Parameters: outfile (str) – Name of the output file. The extension of this string will be stripped when generating the XML, YAML and Numpy filenames. make_plots (bool) – Generate diagnostic plots. make_residuals (bool) – Run residual analysis. save_model_map (bool) – Save the current counts model to a FITS file. format (str) – Set the output file format (yaml or npy).
write_xml(xmlfile)[source]

Save current model definition as XML file.

Parameters: xmlfile (str) – Name of the output XML file.
zero_source(name)[source]

## fermipy.logger module¶

class fermipy.logger.Logger[source]

Bases: object

This class provides helper functions which facilitate creating instances of the built-in logger class.

static get(name, logfile, loglevel=10)[source]
static setup(config=None, logfile=None)[source]

This method sets up the default configuration of the logger. Once this method is called, all subsequently created Logger instances will inherit this configuration.

class fermipy.logger.StreamLogger(name='stdout', logfile=None, quiet=True)[source]

Bases: object

File-like object to log stdout/stderr using the logging module.

close()[source]
flush()[source]
write(msg, level=10)[source]
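
The idea of a file-like logging adapter can be sketched as follows. This is an illustrative stand-in with a hypothetical name (`StdoutLogger`); the real StreamLogger also tees output to a log file and can replace sys.stdout and sys.stderr.

```python
import logging

class StdoutLogger(object):
    """File-like object that forwards writes to the logging module."""

    def __init__(self, name='stdout', level=logging.DEBUG):
        self.logger = logging.getLogger(name)
        self.level = level

    def write(self, msg):
        msg = msg.rstrip('\n')
        if msg:  # drop the bare newlines emitted by print statements
            self.logger.log(self.level, msg)

    def flush(self):  # part of the file-object protocol; nothing to do
        pass
```

An instance of such a class can be assigned to sys.stdout so that anything printed by external tools is captured in the log.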
fermipy.logger.logLevel(level)[source]

Return a Python logging level corresponding to a HEASOFT-style chatter level.
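
For illustration, such a translation might look like the sketch below. The numeric mapping here is an assumption for illustration only; the actual values used by fermipy may differ.

```python
import logging

# Assumed mapping: a HEASOFT-style chatter level (higher = more verbose)
# to a Python logging level (lower = more verbose).
_LEVELS = {0: logging.CRITICAL, 1: logging.ERROR, 2: logging.WARNING,
           3: logging.INFO, 4: logging.DEBUG}

def log_level(chatter):
    # Chatter levels above 4 are treated as maximally verbose.
    return _LEVELS.get(min(chatter, 4), logging.DEBUG)

print(log_level(3) == logging.INFO)  # -> True
```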

## fermipy.roi_model module¶

class fermipy.roi_model.IsoSource(name, data)[source]
diffuse
filefunction
write_xml(root)[source]
class fermipy.roi_model.MapCubeSource(name, data)[source]
diffuse
mapcube
write_xml(root)[source]
class fermipy.roi_model.Model(name, data=None)[source]

Bases: object

Base class for source objects. This class is a container for both spectral and spatial parameters as well as other source properties such as TS, Npred, and location within the ROI.

add_name(name)[source]
assoc
check_cuts(cuts)[source]
static create_from_dict(src_dict, roi_skydir=None)[source]
data
get_catalog_dict()[source]
get_norm()[source]
items()[source]
name
names
params
set_name(name, names=None)[source]
set_spectral_pars(spectral_pars)[source]
spatial_pars
spectral_pars
update_data(d)[source]
update_from_source(src)[source]
update_spectral_pars(spectral_pars)[source]
class fermipy.roi_model.ROIModel(config=None, **kwargs)[source]

This class is responsible for managing the ROI model (both sources and diffuse components). Source catalogs can be read from either FITS or XML files. Individual components are represented by instances of Model and can be accessed by name using the bracket operator.

• Create an ROI with all 3FGL sources and print a summary of its contents:
>>> skydir = astropy.coordinates.SkyCoord(0.0,0.0,unit='deg')
>>> roi = ROIModel({'catalogs' : ['3FGL'],'src_roiwidth' : 10.0},skydir=skydir)
>>> print roi
name                SpatialModel   SpectrumType     offset        ts       npred
--------------------------------------------------------------------------------
3FGL J2357.3-0150   PointSource    PowerLaw          1.956       nan         0.0
3FGL J0006.2+0135   PointSource    PowerLaw          2.232       nan         0.0
3FGL J0016.3-0013   PointSource    PowerLaw          4.084       nan         0.0
3FGL J0014.3-0455   PointSource    PowerLaw          6.085       nan         0.0

• Print a summary of an individual source
>>> print roi['3FGL J0006.2+0135']

• Get the SkyCoord for a source
>>> dir = roi['SourceA'].skydir

• Loop over all sources and print their names
>>> for s in roi.sources: print s.name

clear()[source]

Clear the contents of the ROI.

copy_source(name)[source]
static create(selection, config, **kwargs)[source]

Create an ROIModel instance.

static create_from_position(skydir, config, **kwargs)[source]

Create an ROIModel instance centered on a sky direction.

Parameters: skydir (SkyCoord) – Sky direction on which the ROI will be centered. config (dict) – Model configuration dictionary.
static create_from_roi_data(datafile)[source]

Create an ROI model.

static create_from_source(name, config, **kwargs)[source]

Create an ROI centered on the given source.

static create_roi_from_ft1(ft1file, config)[source]

Create an ROI model by extracting the source coordinates from an FT1 file.

create_source(name, src_dict, build_index=True, merge_sources=True)[source]

Add a new source to the ROI model from a dictionary or an existing source object.

Parameters: name (str) – Source name. src_dict (dict or Source) – Dictionary or Source object defining the properties of the source.

Returns: src (Source)
defaults = {'logfile': (None, u'', <type 'str'>), u'catalogs': (None, u'', <type 'list'>), u'src_roiwidth': (None, u'Width of square selection cut for inclusion of catalog sources in the model. Includes sources within a square region with side src_roiwidth centered on the ROI. If this parameter is none then no selection is applied. This selection will be ORed with the src_radius selection.', <type 'float'>), u'limbdiff': (None, u'', <type 'list'>), u'src_radius_roi': (None, u'Half-width of src_roiwidth selection. This parameter can be used in lieu of src_roiwidth.', <type 'float'>), u'extract_diffuse': (False, u'Extract a copy of all mapcube components centered on the ROI.', <type 'bool'>), u'galdiff': (None, u'Set the galactic IEM mapcube.', <type 'list'>), u'extdir': (None, u'Set a directory that will be searched for extended source FITS templates. Template files in this directory will take precendence over catalog source templates with the same name.', <type 'str'>), u'sources': (None, u'', <type 'list'>), 'fileio': {u'workdir': (None, u'Override the working directory.', <type 'str'>), u'savefits': (True, u'Save intermediate FITS files.', <type 'bool'>), u'scratchdir': (u'/scratch', u'Path to the scratch directory.', <type 'str'>), u'outdir': (None, u'Path of the output directory. If none this will default to the directory containing the configuration file.', <type 'str'>), u'logfile': (None, u'Path to log file. If None then log will be written to fermipy.log.', <type 'str'>), u'usescratch': (False, u'Run analysis in a temporary directory under scratchdir.', <type 'bool'>)}, 'logging': {u'verbosity': (3, u'', <type 'int'>), u'chatter': (3, u'Set the chatter parameter of the STs.', <type 'int'>)}, u'isodiff': (None, u'Set the isotropic template.', <type 'list'>), u'merge_sources': (True, u'Merge properties of sources that appear in multiple source catalogs. 
If merge_sources=false then subsequent sources with the same name will be ignored.', <type 'bool'>), u'assoc_xmatch_columns': ([u'3FGL_Name'], u'Choose a set of association columns on which to cross-match catalogs.', <type 'list'>), u'diffuse': (None, u'', <type 'list'>), u'src_radius': (None, u'Radius of circular selection cut for inclusion of catalog sources in the model. Includes sources within a circle of this radius centered on the ROI. If this parameter is none then no selection is applied. This selection will be ORed with the src_roiwidth selection.', <type 'float'>)}
delete_sources(srcs)[source]
diffuse_sources
get_nearby_sources(name, dist, min_dist=None, square=False)[source]
get_source_by_name(name)[source]

Return a single source in the ROI with the given name. The input name string can match any of the strings in the names property of the source object. Case and whitespace are ignored when matching name strings. If no source or more than one source matches, an exception is thrown.

Parameters: name (str) – Name string.

Returns: srcs (Model) – A source object.
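
The case- and whitespace-insensitive matching can be illustrated with a short sketch (`match_name` is a hypothetical helper, not the fermipy implementation):

```python
def match_name(query, names):
    # Normalize by stripping all whitespace and lowercasing before comparing.
    key = ''.join(query.split()).lower()
    return [n for n in names if ''.join(n.split()).lower() == key]

names = ['3FGL J0006.2+0135', 'Crab', 'Vela']
print(match_name('3fglj0006.2+0135', names))  # -> ['3FGL J0006.2+0135']
```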
get_sources(cuts=None, distance=None, minmax_ts=None, minmax_npred=None, square=False, exclude_diffuse=False, coordsys=u'CEL')[source]

Retrieve list of sources satisfying the given selections.

Returns: srcs (list) – List of source objects.
get_sources_by_name(name)[source]

Return a list of sources in the ROI matching the given name. The input name string can match any of the strings in the names property of the source object. Case and whitespace are ignored when matching name strings.

Parameters: name (str) – Name string.

Returns: srcs (list) – A list of Model objects.
get_sources_by_position(skydir, dist, min_dist=None, square=False, coordsys=u'CEL')[source]

Retrieve sources within a certain angular distance of a sky coordinate. This function supports two types of geometric selections: circular (square=False) and square (square=True). The circular selection finds all sources within a given angular distance of the target position. The square selection finds sources within an ROI-like region of size R x R where R = 2 x dist.

Parameters: skydir (SkyCoord) – Sky direction with respect to which the selection will be applied. dist (float) – Maximum distance in degrees from the sky coordinate. square (bool) – Choose whether to apply a circular or square selection. coordsys (str) – Coordinate system to use when applying a selection with square=True.
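
The two selection geometries can be sketched with a flat-sky approximation. This is an illustrative, hypothetical helper working on plain (name, lon, lat) tuples; fermipy's implementation uses proper spherical geometry via SkyCoord.

```python
import math

def select_sources(srcs, lon0, lat0, dist, square=False):
    # srcs is a list of (name, lon, lat) tuples in degrees.
    out = []
    for name, lon, lat in srcs:
        dlon = (lon - lon0 + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
        dlon *= math.cos(math.radians(lat0))          # meridian convergence
        dlat = lat - lat0
        if square:
            # ROI-like region of size R x R with R = 2 * dist
            keep = abs(dlon) <= dist and abs(dlat) <= dist
        else:
            keep = math.hypot(dlon, dlat) <= dist
        if keep:
            out.append(name)
    return out

srcs = [('A', 0.5, 0.0), ('B', 4.0, 4.0), ('C', 10.0, 0.0)]
print(select_sources(srcs, 0.0, 0.0, 5.0))               # -> ['A']
print(select_sources(srcs, 0.0, 0.0, 5.0, square=True))  # -> ['A', 'B']
```

Note how a source at (4, 4) degrees passes the square cut but fails the circular one, since its angular offset exceeds 5 degrees.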
get_sources_by_property(pname, pmin, pmax=None)[source]
has_source(name)[source]
load(**kwargs)[source]

Load both point source and diffuse components.

load_diffuse_srcs()[source]
load_fits_catalog(name, **kwargs)[source]

Load sources from a FITS catalog file.

Parameters: name (str) – Catalog name or path to a catalog FITS file.
load_source(src, build_index=True, merge_sources=True, **kwargs)[source]

Parameters: src (Source) – Source object that will be added to the ROI. merge_sources (bool) – When a source matches an existing source in the model update that source with the properties of the new source. build_index (bool) – Re-make the source index after loading this source.
load_sources(sources)[source]

Delete all sources in the ROI and load the input source list.

load_xml(xmlfile, **kwargs)[source]

Load sources from an XML file.

match_source(src)[source]

Look for source or sources in the model that match the given source. Sources are matched by name and any association columns defined in the assoc_xmatch_columns parameter.

point_sources
skydir

Return the sky direction object corresponding to the center of the ROI.

sources
src_name_cols = [u'Source_Name', u'ASSOC', u'ASSOC1', u'ASSOC2', u'ASSOC_GAM', u'1FHL_Name', u'2FGL_Name', u'3FGL_Name', u'ASSOC_GAM1', u'ASSOC_GAM2', u'ASSOC_TEV']
write_fits(fitsfile)[source]

Write the ROI model to a FITS file.

write_xml(xmlfile)[source]

Save the ROI model as an XML file.

class fermipy.roi_model.Source(name, data, radec=None)[source]

Class representation of a source (non-diffuse) model component. A source object serves as a container for the properties of that source (position, spatial/spectral parameters, TS, etc.) as derived in the current analysis. Most properties of a source object can be accessed with the bracket operator:

# Return the TS of this source
>>> print src['ts']

# Get a SkyCoord representation of the source position
>>> print src.skydir

associations
static create_from_dict(src_dict, roi_skydir=None)[source]

Create a source object from a python dictionary.

Parameters: src_dict (dict) – Dictionary defining the properties of the source.
static create_from_xml(root, extdir=None)[source]

Create a Source object from an XML node.

data
diffuse
extended
load_from_catalog()[source]

Load spectral parameters from catalog values.

radec
separation(src)[source]
set_position(skydir)[source]

Set the position of the source.

Parameters: skydir (SkyCoord) –
set_roi_direction(roidir)[source]
set_spatial_model(spatial_model, spatial_width=None)[source]
skydir

Return a SkyCoord representation of the source position.

Returns: skydir (SkyCoord)
update_data(d)[source]
write_xml(root)[source]

Write this source to an XML node.

fermipy.roi_model.get_dist_to_edge(skydir, lon, lat, width, coordsys=u'CEL')[source]
fermipy.roi_model.get_linear_dist(skydir, lon, lat, coordsys=u'CEL')[source]
fermipy.roi_model.get_params_dict(pars_dict)[source]
fermipy.roi_model.get_skydir_distance_mask(src_skydir, skydir, dist, min_dist=None, square=False, coordsys=u'CEL')[source]

Retrieve sources within a certain angular distance of an (ra,dec) coordinate. This function supports two types of geometric selections: circular (square=False) and square (square=True). The circular selection finds all sources within a given angular distance of the target position. The square selection finds sources within an ROI-like region of size R x R where R = 2 x dist.

Parameters: src_skydir (SkyCoord) – Array of sky directions. skydir (SkyCoord) – Sky direction with respect to which the selection will be applied. dist (float) – Maximum distance in degrees from the sky coordinate. square (bool) – Choose whether to apply a circular or square selection. coordsys (str) – Coordinate system to use when applying a selection with square=True.
fermipy.roi_model.resolve_file_path(path, **kwargs)[source]

## fermipy.utils module¶

fermipy.utils.apply_minmax_selection(val, val_minmax)[source]
fermipy.utils.cl_to_dlnl(cl)[source]

Compute the delta-log-likelihood corresponding to an upper limit of the given confidence level.

fermipy.utils.convolve2d_disk(fn, r, sig, nstep=200)[source]

Evaluate the convolution f'(r) = f(r) * g(r) where f(r) is an azimuthally symmetric function in two dimensions and g is a step function given by:

g(r) = H(1-r/s)

Parameters: fn (function) – Input function that takes a single radial coordinate parameter. r (ndarray) – Array of points at which the convolution is to be evaluated. sig (float) – Radius parameter of the step function. nstep (int) – Number of sampling points for numerical integration.
fermipy.utils.convolve2d_gauss(fn, r, sig, nstep=200)[source]

Evaluate the convolution f'(r) = f(r) * g(r) where f(r) is an azimuthally symmetric function in two dimensions and g is a gaussian given by:

g(r) = 1/(2*pi*s^2) Exp[-r^2/(2*s^2)]

Parameters: fn (function) – Input function that takes a single radial coordinate parameter. r (ndarray) – Array of points at which the convolution is to be evaluated. sig (float) – Width parameter of the gaussian. nstep (int) – Number of sampling points for numerical integration.
fermipy.utils.create_hpx_disk_region_string(skyDir, coordsys, radius, inclusive=0)[source]
fermipy.utils.create_model_name(src)[source]

Generate a name for a source object given its spatial/spectral properties.

Parameters: src (Source) – A source object.

Returns: name (str) – A source name.
fermipy.utils.create_source_name(skydir)[source]
fermipy.utils.create_xml_element(root, name, attrib)[source]
fermipy.utils.edge_to_center(edges)[source]
fermipy.utils.edge_to_width(edges)[source]
fermipy.utils.eq2gal(ra, dec)[source]
fermipy.utils.extend_array(edges, binsz, lo, hi)[source]

Extend an array to encompass lo and hi values.
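
A minimal sketch of this operation, assuming uniformly spaced edges (illustrative only, not the fermipy implementation):

```python
import numpy as np

def extend_array(edges, binsz, lo, hi):
    # Pad a uniformly spaced edge array on both sides with bins of
    # width binsz until it covers the interval [lo, hi].
    lo_edges = np.arange(edges[0], lo - binsz, -binsz)[1:][::-1]
    hi_edges = np.arange(edges[-1], hi + binsz, binsz)[1:]
    return np.concatenate((lo_edges, edges, hi_edges))

print(extend_array(np.array([1.0, 2.0, 3.0]), 1.0, -1.0, 5.0))
# -> [-1.  0.  1.  2.  3.  4.  5.]
```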

fermipy.utils.extract_mapcube_region(infile, skydir, outfile, maphdu=0)[source]

Extract a region out of an all-sky mapcube file.

Parameters: infile (str) – Path to mapcube file. skydir (SkyCoord) –
fermipy.utils.find_function_root(fn, x0, xb, delta=0.0)[source]

Find the root of a function: f(x)+delta in the interval encompassed by x0 and xb.

Parameters: fn (function) – Python function. x0 (float) – Fixed bound for the root search. This will either be used as the lower or upper bound depending on the relative value of xb. xb (float) – Upper or lower bound for the root search. If a root is not found in the interval [x0,xb]/[xb,x0] this value will be increased/decreased until a change in sign is found.
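
The bound-expansion strategy can be sketched with plain bisection. This is an illustrative, hypothetical `find_root` helper; fermipy's implementation may use a different root finder.

```python
def find_root(fn, x0, xb, delta=0.0, max_expand=10, tol=1e-8):
    # Find a root of f(x) + delta starting from the interval [x0, xb],
    # pushing xb away from x0 until a sign change is bracketed.
    g = lambda x: fn(x) + delta
    step = xb - x0
    for _ in range(max_expand):
        if g(x0) * g(xb) <= 0:
            break
        xb += step
    else:
        raise ValueError('no sign change found')
    a, b = (x0, xb) if x0 < xb else (xb, x0)
    while b - a > tol:  # plain bisection
        m = 0.5 * (a + b)
        if g(a) * g(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

print(round(find_root(lambda x: x * x - 4.0, 0.0, 1.0), 6))  # -> 2.0
```

Here the initial interval [0, 1] contains no root of x^2 - 4, so the upper bound is expanded until the root at x = 2 is bracketed.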
fermipy.utils.fit_parabola(z, ix, iy, dpix=2, zmin=None)[source]
fermipy.utils.fits_recarray_to_dict(table)[source]

Convert a FITS recarray to a python dictionary.

fermipy.utils.format_filename(outdir, basename, prefix=None, extension=None)[source]
fermipy.utils.gal2eq(l, b)[source]
fermipy.utils.get_parameter_limits(xval, logLike, ul_confidence=0.95)[source]

Compute upper/lower limits, peak position, and 1-sigma errors from a 1-D likelihood function.

Parameters: xval (ndarray) – Array of parameter values. logLike (ndarray) – Array of log-likelihood values. ul_confidence (float) – Confidence level to use for limit calculation.
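
A simplified version of the limit extraction can be sketched with linear interpolation on a dense scan, using a fixed delta-log-likelihood instead of a confidence level. `parameter_limits` is a hypothetical helper, not the fermipy implementation.

```python
import numpy as np

def parameter_limits(xval, loglike, dlnl=0.5):
    # Locate the peak of a 1-D log-likelihood scan and the points where
    # it drops by dlnl (0.5 corresponds to 1-sigma errors).
    imax = np.argmax(loglike)
    target = loglike[imax] - dlnl
    # Interpolate the crossing on each side of the peak.
    lo = np.interp(target, loglike[:imax + 1], xval[:imax + 1])
    hi = np.interp(target, loglike[imax:][::-1], xval[imax:][::-1])
    return lo, xval[imax], hi

x = np.linspace(-3.0, 3.0, 601)
lnl = -0.5 * x ** 2  # unit-sigma gaussian likelihood peaked at 0
lo, x0, hi = parameter_limits(x, lnl)  # ~ (-1.0, 0.0, 1.0)
```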
fermipy.utils.interpolate_function_min(x, y)[source]
fermipy.utils.join_strings(strings, sep=u'_')[source]
fermipy.utils.load_data(infile, workdir=None)[source]

Load python data structure from either a YAML or numpy file.

fermipy.utils.load_npy(infile)[source]
fermipy.utils.load_xml_elements(root, path)[source]
fermipy.utils.load_yaml(infile, **kwargs)[source]
fermipy.utils.lonlat_to_xyz(lon, lat)[source]
fermipy.utils.make_cdisk_kernel(psf, sigma, npix, cdelt, xpix, ypix, normalize=False)[source]

Make a kernel for a PSF-convolved 2D disk.

Parameters: psf (PSFModel) – sigma (float) – 68% containment radius in degrees.
fermipy.utils.make_cgauss_kernel(psf, sigma, npix, cdelt, xpix, ypix, normalize=False)[source]

Make a kernel for a PSF-convolved 2D gaussian.

Parameters: psf (PSFModel) – sigma (float) – 68% containment radius in degrees.
fermipy.utils.make_disk_kernel(sigma, npix=501, cdelt=0.01, xpix=0.0, ypix=0.0)[source]

Make kernel for a 2D disk.

Parameters: sigma (float) – Disk radius in deg.
fermipy.utils.make_gaussian_kernel(sigma, npix=501, cdelt=0.01, xpix=0.0, ypix=0.0)[source]

Make kernel for a 2D gaussian.

Parameters: sigma (float) – 68% containment radius in degrees.
fermipy.utils.make_pixel_offset(npix, xpix=0.0, ypix=0.0)[source]

Make a 2D array with the distance of each pixel from a reference direction in pixel coordinates. Pixel coordinates are defined such that (0,0) is located at the center of the coordinate grid.
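
A compact sketch of this construction (illustrative only, not the fermipy implementation):

```python
import numpy as np

def pixel_offset(npix, xpix=0.0, ypix=0.0):
    # Distance of each pixel from (xpix, ypix) in pixel coordinates,
    # with (0, 0) at the center of the grid.
    i = np.arange(npix) - 0.5 * (npix - 1)
    dx, dy = np.meshgrid(i - xpix, i - ypix)
    return np.sqrt(dx ** 2 + dy ** 2)

print(pixel_offset(3))
```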

fermipy.utils.make_psf_kernel(psf, npix, cdelt, xpix, ypix, normalize=False)[source]

Generate a kernel for a point-source.

Parameters: psf (PSFModel) – npix (int) – Number of pixels in X and Y dimensions. cdelt (float) – Pixel size in degrees.
fermipy.utils.merge_dict(d0, d1, add_new_keys=False, append_arrays=False)[source]

Recursively merge the contents of python dictionary d0 with the contents of another python dictionary, d1.

Parameters: d0 (dict) – The input dictionary. d1 (dict) – Dictionary to be merged with the input dictionary. add_new_keys (bool) – Do not skip keys that only exist in d1. append_arrays (bool) – If an element is a numpy array set the value of that element by concatenating the two arrays.
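
The recursive merge can be sketched as follows (array concatenation omitted for brevity; this is an illustrative stand-in, not the fermipy implementation):

```python
def merge_dict(d0, d1, add_new_keys=False):
    # Merge d1 into a shallow copy of d0, recursing into nested dicts.
    out = dict(d0)
    for k, v in d1.items():
        if k not in out:
            if add_new_keys:
                out[k] = v
        elif isinstance(out[k], dict) and isinstance(v, dict):
            out[k] = merge_dict(out[k], v, add_new_keys)
        else:
            out[k] = v
    return out

print(merge_dict({'a': 1, 'b': {'c': 2}}, {'b': {'c': 3}, 'd': 4}))
# -> {'a': 1, 'b': {'c': 3}}  ('d' is skipped unless add_new_keys=True)
```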
fermipy.utils.mkdir(dir)[source]
fermipy.utils.parabola((x, y), amplitude, x0, y0, sx, sy, theta)[source]
fermipy.utils.poly_to_parabola(coeff)[source]
fermipy.utils.prettify_xml(elem)[source]

Return a pretty-printed XML string for the Element.

fermipy.utils.project(lon0, lat0, lon1, lat1)[source]

This function performs a stereographic projection on the unit vector (lon1,lat1) with the pole defined at the reference unit vector (lon0,lat0).

fermipy.utils.rebin_map(k, nebin, npix, rebin)[source]
fermipy.utils.resolve_path(path, workdir=None)[source]
fermipy.utils.scale_parameter(p)[source]
fermipy.utils.strip_suffix(filename, suffix)[source]
fermipy.utils.tolist(x)[source]

Convenience function that takes in a nested structure of lists and dictionaries and converts everything to its base objects. This is useful for dumping an object to a YAML file.

1. numpy arrays into python lists

>>> type(tolist(np.asarray(123))) == int
True
>>> tolist(np.asarray([1,2,3])) == [1,2,3]
True

2. numpy strings into python strings.

>>> tolist([np.asarray('cat')])==['cat']
True

3. an ordered dict to a dict

>>> ordered=OrderedDict(a=1, b=2)
>>> type(tolist(ordered)) == dict
True

4. converts unicode to regular strings

>>> type(u'a') == str
False
>>> type(tolist(u'a')) == str
True

5. converts numbers & bools in strings to their real representation (i.e. ‘123’ -> 123)

>>> type(tolist(np.asarray('123'))) == int
True
>>> type(tolist('123')) == int
True
>>> tolist('False') == False
True

fermipy.utils.unicode_representer(dumper, uni)[source]
fermipy.utils.unicode_to_str(args)[source]
fermipy.utils.update_keys(input_dict, key_map)[source]
fermipy.utils.val_to_bin(edges, x)[source]

Convert axis coordinate to bin index.

fermipy.utils.val_to_bin_bounded(edges, x)[source]

Convert axis coordinate to bin index.

fermipy.utils.val_to_edge(edges, x)[source]

Convert axis coordinate to bin index.
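
The bin-lookup functions above can be sketched with the standard bisect module. This is only an illustration of the indexing convention; fermipy's versions are vectorized with numpy.

```python
import bisect

def val_to_bin(edges, x):
    # Index of the bin containing x; values below the first edge map to -1,
    # and a value exactly on an edge falls into the upper bin.
    return bisect.bisect_right(edges, x) - 1

def val_to_bin_bounded(edges, x):
    # Same lookup, but clipped into the valid range [0, nbins - 1].
    nbins = len(edges) - 1
    return min(max(val_to_bin(edges, x), 0), nbins - 1)

edges = [0.0, 1.0, 2.0, 3.0]
print(val_to_bin(edges, 1.5))           # -> 1
print(val_to_bin_bounded(edges, 10.0))  # -> 2
```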

fermipy.utils.write_fits_image(data, wcs, outfile)[source]
fermipy.utils.write_hpx_image(data, hpx, outfile, extname=u'SKYMAP')[source]
fermipy.utils.write_yaml(o, outfile, **kwargs)[source]
fermipy.utils.xyz_to_lonlat(*args)[source]

## fermipy.sed module¶

Utilities for dealing with SEDs

Many parts of this code are taken from dsphs/like/lnlfn.py by
Matthew Wood <mdwood@slac.stanford.edu> Alex Drlica-Wagner <kadrlica@slac.stanford.edu>
class fermipy.sed.CastroData(norm_vals, nll_vals, specData, norm_type)[source]

Bases: object

This class wraps the data needed to make a “Castro” plot, namely the log-likelihood as a function of normalization for a series of energy bins.

TS_spectrum(spec_vals)[source]

Calculate the TS for a given set of spectral values.

__call__(x)[source]

Return the negative log-likelihood for an array of values, summed over the energy bins

Parameters: x (ndarray) – Array of nEbins x M values.

Returns: nll_val (ndarray) – Array of negative log-likelihood values.
__getitem__(i)[source]

Return the LnLFn object for the i-th energy bin.

static create_from_fits(fitsfile, norm_type=u'FLUX', hdu_scan=u'SCANDATA', hdu_energies=u'EBOUNDS')[source]
create_functor(specType, scale=1000.0)[source]

Create a functor object that computes normalizations in a sequence of energy bins for a given spectral model.

derivative(x, der=1)[source]

Return the derivative of the log-likelihood summed over the energy bins.

Parameters: x (ndarray) – Array of nEbins x M values. der (int) – Order of the derivative.

Returns: der_val (ndarray) – Array of negative log-likelihood derivative values.
fitNorm_v2(specVals)[source]

Fit the normalization given a set of spectral values that define a spectral shape

This version uses scipy.optimize.fmin

Parameters: specVals (ndarray) – Array of nebin values that define a spectral shape.

Returns: The best-fit normalization value.
fitNormalization(specVals, xlims)[source]

Fit the normalization given a set of spectral values that define a spectral shape

This version is faster, and solves for the root of the derivative.

Parameters: specVals (an array of nebins values that define a spectral shape) – xlims (fit limits) – Returns: the best-fit normalization value
fit_spectrum(specFunc, initPars)[source]

Fit for the free parameters of a spectral function

Parameters: specFunc (The Spectral Function) – initPars (The initial values of the parameters) – Returns: result (tuple) – The output of scipy.optimize.fmin. spec_out (ndarray) – The best-fit spectral values. TS_spec (float) – The TS of the best-fit spectrum.
fn_mles()[source]

returns the summed likelihood at the maximum likelihood estimate

Note that this simply sums the maximum likelihood values in each bin and does not impose any constraint between bins

getLimits(alpha, upper=True)[source]

Evaluate the limits corresponding to a C.L. of (1-alpha)%.

Parameters: alpha (limit confidence level) – upper (upper or lower limits) – Returns: an array of limit values, one for each energy bin
mles()[source]

return the maximum likelihood estimates for each of the energy bins

nll_null

Return the negative log-likelihood for the null-hypothesis

norm_type

Return the normalization type flag

specData

Return the Spectral Data object

spectrum_loglike(specType, params, scale=1000.0)[source]
test_spectra(spec_types=[u'PowerLaw', u'LogParabola', u'PLExpCutoff'])[source]
ts_vals()[source]

returns test statistic values for each energy bin

class fermipy.sed.Interpolator(x, y)[source]

Bases: object

Helper class for interpolating a 1-D function from a set of tabulated values.

Safely deals with overflows and underflows

__call__(x)[source]

Return the interpolated values for an array of inputs

x : the inputs

Note that if any x value is outside the interpolation range this will return a linear extrapolation based on the slope at the endpoint
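The endpoint-slope extrapolation can be sketched as follows, assuming a cubic interpolating spline over hypothetical tabulated values (illustrative only, not fermipy's exact code):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Tabulated values of a toy function (hypothetical data).
x = np.linspace(0.0, 1.0, 11)
y = x ** 2

spline = UnivariateSpline(x, y, s=0)   # s=0 -> interpolating spline
dspline = spline.derivative()

def safe_eval(xv):
    """Evaluate the spline; for inputs outside the tabulated range,
    extrapolate linearly using the slope at the nearest endpoint."""
    xv = np.asarray(xv, dtype=float)
    out = spline(np.clip(xv, x[0], x[-1]))
    out = np.where(xv < x[0], spline(x[0]) + dspline(x[0]) * (xv - x[0]), out)
    out = np.where(xv > x[-1], spline(x[-1]) + dspline(x[-1]) * (xv - x[-1]), out)
    return out
```

Linear extrapolation keeps the evaluated function finite and monotone beyond the tabulated range, which avoids the wild oscillations a raw spline can produce there.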

derivative(x, der=1)[source]

return the derivative at an array of input values

x : the inputs der : the order of the derivative

x

return the x values used to construct the spline

xmax

return the maximum value over which the spline is defined

xmin

return the minimum value over which the spline is defined

y

return the y values used to construct the spline

class fermipy.sed.LnLFn(x, y, norm_type=0)[source]

Bases: object

Helper class for interpolating a 1-D log-likelihood function from a set of tabulated values.

TS()[source]

return the Test Statistic

fn_mle()[source]

return the function value at the maximum likelihood estimate

getInterval(alpha)[source]

Evaluate the interval corresponding to a C.L. of (1-alpha)%.

Parameters: alpha (limit confidence level.) –
getLimit(alpha, upper=True)[source]

Evaluate the limits corresponding to a C.L. of (1-alpha)%.

Parameters: alpha (limit confidence level.) – upper (upper or lower limits.) –
interp

return the underlying Interpolator object

mle()[source]

return the maximum likelihood estimate

This will return the cached value, if it exists

norm_type

return a code specifying the quantity used for the flux

0: Normalization w.r.t. the test source 1: Flux of the test source ( ph cm^-2 s^-1 ) 2: Energy Flux of the test source ( MeV cm^-2 s^-1 ) 3: Number of predicted photons 4: Differential flux of the test source ( ph cm^-2 s^-1 MeV^-1 ) 5: Differential energy flux of the test source ( MeV cm^-2 s^-1 MeV^-1 )
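The MLE and TS of a tabulated 1-D likelihood can be sketched as below, using scipy on a hypothetical parabolic scan rather than fermipy's internals (TS is taken as twice the log-likelihood difference between the best fit and the null, norm = 0):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize_scalar

# Tabulated 1-D likelihood scan (hypothetical parabolic values).
x = np.linspace(0.0, 10.0, 101)
nll = (x - 3.0) ** 2                     # negative log-likelihood, minimum at 3
fn = interp1d(x, nll, kind="cubic")

# Maximum likelihood estimate = minimum of the interpolated NLL curve.
res = minimize_scalar(lambda v: float(fn(v)), bounds=(x[0], x[-1]),
                      method="bounded")
mle = res.x
# TS = 2 * (log L(mle) - log L(0)) = 2 * (nll(0) - nll(mle))
ts = 2.0 * (float(fn(0.0)) - float(fn(mle)))
```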

class fermipy.sed.SEDGenerator[source]

Bases: object

Mixin class that provides SED functionality to GTAnalysis.

sed(name, profile=True, energies=None, **kwargs)[source]

Generate a spectral energy distribution (SED) for a source. This function will fit the normalization of the source in each energy bin. By default the SED will be generated with the analysis energy bins but a custom binning can be defined with the energies parameter.

Parameters: name (str) – Source name. prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). profile (bool) – Profile the likelihood in each energy bin. energies (ndarray) – Sequence of energies in log10(E/MeV) defining the edges of the energy bins. If this argument is None then the analysis energy bins will be used. The energies in this sequence must align with the bin edges of the underlying analysis instance. bin_index (float) – Spectral index that will be used when fitting the energy distribution within an energy bin. use_local_index (bool) – Use a power-law approximation to the shape of the global spectrum in each bin. If this is false then a constant index set to bin_index will be used. fix_background (bool) – Fix background components when fitting the flux normalization in each energy bin. If fix_background=False then all background parameters that are currently free in the fit will be profiled. By default fix_background=True. ul_confidence (float) – Set the confidence level that will be used for the calculation of flux upper limits in each energy bin. cov_scale (float) – Scaling factor that will be applied when setting the gaussian prior on the normalization of free background sources. If this parameter is None then no gaussian prior will be applied. write_fits (bool) – write_npy (bool) – Returns: sed (dict) – Dictionary containing output of the SED analysis. This dictionary is also saved to the ‘sed’ dictionary of the Source instance.
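The core operation, fitting a flux normalization independently in each energy bin, can be sketched with a toy Poisson likelihood. The counts and model values below are hypothetical and the fit is a stand-in for fermipy's per-bin profile:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical per-bin data: observed counts and model counts at norm = 1.
counts = np.array([120.0, 60.0, 25.0])
model_unit = np.array([100.0, 50.0, 20.0])

def nll(norm, n, m):
    """Poisson negative log-likelihood, dropping the data-only constant."""
    mu = norm * m
    return float(np.sum(mu - n * np.log(mu)))

# Fit the normalization independently in each energy bin, as sed() does.
norms = [minimize_scalar(nll, bounds=(1e-3, 10.0), args=(n, m),
                         method="bounded").x
         for n, m in zip(counts, model_unit)]
```

For a single bin the analytic best fit is simply counts/model, which the numerical fit reproduces; the real SED additionally profiles or constrains background parameters per bin.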
class fermipy.sed.SpecData(emin, emax, dfde, flux, eflux, npred)[source]

Bases: object

This class wraps spectral data, e.g., energy bin definitions, flux values and number of predicted photons

bin_widths

return the energy bin widths

dfde

return the differential flux values

ebins

return the energy bin edges

eflux

return the energy flux values

emax

return the upper energy bin edges

emin

return the lower energy bin edges

evals

return the energy centers

log_ebins

return the log10 of the energy bin edges

nE

return the number of energy bins

npred

return the number of predicted events

class fermipy.sed.TSCube(tsmap, normmap, tscube, norm_vals, nll_vals, specData, norm_type)[source]

Bases: object

castroData_from_ipix(ipix, colwise=False)[source]

Build a CastroData object for a particular pixel

castroData_from_pix_xy(xy, colwise=False)[source]

Build a CastroData object for a particular pixel

static create_from_fits(fitsfile, norm_type=u'FLUX')[source]

Build a TSCube object from a fits file created by gttscube

Parameters: fitsfile (str) – Path to the tscube FITS file. norm_type (str) – String specifying the quantity used for the normalization
find_and_refine_peaks(threshold, min_separation=1.0, use_cumul=False)[source]
find_sources(threshold, min_separation=1.0, use_cumul=False, output_peaks=False, output_castro=False, output_specInfo=False, output_src_dicts=False, output_srcs=False)[source]
nE

return the number of energy bins

nN

return the number of sample points in each energy bin

normmap

return the Map of the Best-fit normalization value

specData

Return the Spectral Data object

test_spectra_of_peak(peak, spec_types=[u'PowerLaw', u'LogParabola', u'PLExpCutoff'])[source]
ts_cumul

return the Map of the cumulative TestStatistic value per pixel (summed over energy bins)

tscube

return the Cube of the TestStatistic value per pixel / energy bin

tsmap

return the Map of the TestStatistic value

fermipy.sed.build_source_dict(src_name, peak_dict, spec_dict, spec_type)[source]

## fermipy.sourcefind module¶

class fermipy.sourcefind.SourceFinder[source]

Bases: object

Mixin class which provides source-finding functionality to GTAnalysis.

find_sources(prefix=u'', **kwargs)[source]

An iterative source-finding algorithm.

Parameters: model (dict) – Dictionary defining the properties of the test source. This is the model that will be used for generating TS maps. sqrt_ts_threshold (float) – Source threshold in sqrt(TS). Only peaks with sqrt(TS) exceeding this threshold will be used as seeds for new sources. min_separation (float) – Minimum separation in degrees of sources detected in each iteration. The source finder will look for the maximum peak in the TS map within a circular region of this radius. max_iter (int) – Maximum number of source finding iterations. The source finder will continue adding sources until no additional peaks are found or the number of iterations exceeds this number. sources_per_iter (int) – Maximum number of sources that will be added in each iteration. If the number of detected peaks in a given iteration is larger than this number, only the N peaks with the largest TS will be used as seeds for the current iteration. tsmap_fitter (str) – Set the method used internally for generating TS maps. Valid options: tsmap tscube tsmap (dict) – Keyword arguments dictionary for tsmap method. tscube (dict) – Keyword arguments dictionary for tscube method. Returns: peaks (list) – List of peak objects. sources (list) – List of source objects.
localize(name, **kwargs)[source]

Find the best-fit position of a source. Localization is performed in two steps. First a TS map is computed centered on the source with half-width set by dtheta_max. A fit is then performed to the maximum TS peak in this map. The source position is then further refined by scanning the likelihood in the vicinity of the peak found in the first step. The size of the scan region is set to encompass the 99% positional uncertainty contour as determined from the peak fit.

Parameters: name (str) – Source name. dtheta_max (float) – Maximum offset in RA/DEC in deg from the nominal source position that will be used to define the boundaries of the TS map search region. nstep (int) – Number of steps in longitude/latitude that will be taken when refining the source position. The bounds of the scan range are set to the 99% positional uncertainty as determined from the TS map peak fit. The total number of sampling points will be nstep**2. fix_background (bool) – Fix background parameters when fitting the source position. update (bool) – Update the model for this source with the best-fit position. If newname=None this will overwrite the existing source map of this source with one corresponding to its new location. newname (str) – Name that will be assigned to the relocalized source when update=True. If newname is None then the existing source name will be used. Returns: localize (dict) – Dictionary containing results of the localization analysis. This dictionary is also saved to the dictionary of this source under ‘localize’.
fermipy.sourcefind.estimate_pos_and_err_parabolic(tsvals)[source]
Solve for the position and uncertainty of a source in one dimension, assuming that you are near the maximum and the errors are parabolic

Parameters: tsvals (ndarray) – The TS values at the maximum TS and for each pixel on either side. Returns: The position and uncertainty of the source, in pixel units w.r.t. the center of the maximum pixel.
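One common form of this parabolic estimate, for three TS samples at unit pixel spacing, can be sketched as follows (the sigma scaling assumes TS = 2·Δlog L; this is an illustration, not fermipy's exact formula):

```python
import numpy as np

def parabolic_peak(ts3):
    """Estimate peak position and 1-sigma error from three TS samples at
    unit pixel spacing, assuming a locally parabolic TS surface."""
    y_l, y_p, y_r = ts3
    second = y_l - 2.0 * y_p + y_r       # second difference; negative at a peak
    offset = 0.5 * (y_l - y_r) / second  # vertex offset from the center pixel
    sigma = np.sqrt(-2.0 / second)       # 1-sigma from the log-L curvature
    return offset, sigma
```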
fermipy.sourcefind.find_peaks(input_map, threshold, min_separation=0.5)[source]

Find peaks in a 2-D map object that have an amplitude larger than threshold and lie at a distance of at least min_separation from any peak of larger amplitude. The implementation of this method uses maximum_filter.

Parameters: input_map (Map) – threshold (float) – min_separation (float) – Radius of region size in degrees. Sets the minimum allowable separation between peaks. Returns: peaks (list) – List of dictionaries containing the location and amplitude of each peak.
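The threshold-plus-maximum_filter idea can be sketched in a few lines. This simplified version works in pixel units and omits the degree-based min_separation logic; the function and field names are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def simple_find_peaks(image, threshold, size=3):
    """Local maxima above threshold: a pixel is a peak when it equals the
    maximum of its size x size neighborhood and exceeds the threshold."""
    local_max = maximum_filter(image, size=size) == image
    return [{"ix": int(i), "iy": int(j), "amp": float(image[i, j])}
            for i, j in np.argwhere(local_max & (image > threshold))]

# Toy map with two isolated peaks.
img = np.zeros((10, 10))
img[2, 3] = 5.0
img[7, 7] = 8.0
```

The threshold cut is what discards the flat background, where every pixel trivially equals its neighborhood maximum.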
fermipy.sourcefind.fit_error_ellipse(tsmap, xy=None, dpix=3)[source]

Fit a positional uncertainty ellipse from a TS map.

Parameters: tsmap (Map) – xy (tuple) –
fermipy.sourcefind.refine_peak(tsmap, pix)[source]

Solve for the position and uncertainty of a source, assuming that you are near the maximum and the errors are parabolic

Parameters: tsmap (ndarray) – Array with the TS data. The position and uncertainty of the source, in pixel units w.r.t. the center of the maximum pixel

## fermipy.skymap module¶

class fermipy.skymap.HpxMap(counts, hpx)[source]

Representation of a 2D or 3D counts map using HEALPix.

convert_to_cached_wcs(hpx_in, sum_ebins=False, normalize=True)[source]

Make a WCS object and convert HEALPix data into WCS projection

hpx_in : HEALPix input data sum_ebins : bool, sum over energy bins before reprojecting normalize : True -> preserve integral by splitting HEALPix values between bins

returns (WCS object, np.ndarray() with reprojected data)

static create_from_hdu(hdu, ebins)[source]

Creates and returns an HpxMap object from a FITS HDU.

hdu : The FITS HDU ebins : Energy bin edges [optional]

static create_from_hdulist(hdulist, extname=u'SKYMAP', ebounds=u'EBOUNDS')[source]

Creates and returns an HpxMap object from a FITS HDUList

extname : The name of the HDU with the map data ebounds : The name of the HDU with the energy bin data

hpx
make_wcs_from_hpx(sum_ebins=False, proj=u'CAR', oversample=2, normalize=True)[source]

Make a WCS object and convert HEALPix data into WCS projection

sum_ebins : bool, sum over energy bins before reprojecting proj : WCS projection oversample : Oversampling factor for WCS map normalize : True -> preserve integral by splitting HEALPix values between bins

returns (WCS object, np.ndarray() with reprojected data)

NOTE: this re-calculates the mapping; if you have already calculated the mapping it is much faster to use convert_to_cached_wcs() instead

class fermipy.skymap.Map(counts, wcs)[source]

Representation of a 2D or 3D counts map using WCS.

static create(skydir, cdelt, npix, coordsys=u'CEL', projection=u'AIT')[source]
static create_from_fits(fitsfile, **kwargs)[source]
static create_from_hdu(hdu, wcs)[source]
create_image_hdu(name=None)[source]
create_primary_hdu()[source]
get_map_values(lons, lats)[source]

Return the map values corresponding to a set of coordinates

Parameters: lons (array-like) – ‘Longitudes’ (RA or GLON) lats (array-like) – ‘Latitudes’ (DEC or GLAT) Returns: vals (numpy.ndarray((n))) – Values of pixels in the flattened map, np.nan used to flag coords outside of map
get_pixel_indices(lons, lats)[source]

Return the indices in the flat array corresponding to a set of coordinates

Parameters: lons (array-like) – ‘Longitudes’ (RA or GLON) lats (array-like) – ‘Latitudes’ (DEC or GLAT) Returns: idxs (numpy.ndarray((n),’i’)) – Indices of pixels in the flattened map, -1 used to flag coords outside of map
get_pixel_skydirs()[source]
ipix_swap_axes(ipix, colwise=False)[source]

Return the transposed pixel index from the pixel xy coordinates

if colwise is True (False) this assumes the original index was in column wise scheme

ipix_to_xypix(ipix, colwise=False)[source]

Return the pixel xy coordinates from the pixel index

if colwise is True (False) this uses columnwise (rowwise) indexing

pix_center

Return the ROI center in pixel coordinates.

pix_size

Return the pixel size along the two image dimensions.

skydir

Return the sky coordinate of the image center.

sum_over_energy()[source]

Reduce a 3D counts cube to a 2D counts map

wcs
width

Return the dimensions of the image.

xy_pix_to_ipix(xypix, colwise=False)[source]

Return the pixel index from the pixel xy coordinates

if colwise is True (False) this uses columnwise (rowwise) indexing

class fermipy.skymap.Map_Base(counts)[source]

Bases: object

Abstract representation of a 2D or 3D counts map.

counts
get_pixel_indices(lats, lons)[source]
fermipy.skymap.make_coadd_map(maps, proj, shape)[source]
fermipy.skymap.make_coadd_wcs(maps, wcs, shape)[source]
fermipy.skymap.read_map_from_fits(fitsfile, extname=None)[source]

## fermipy.tsmap module¶

class fermipy.tsmap.TSCubeGenerator[source]

Bases: object

tscube(prefix=u'', **kwargs)[source]

Generate a spatial TS map for a source component with properties defined by the model argument. This method uses the gttscube ST application for source fitting and will simultaneously fit the test source normalization as well as the normalizations of any background components that are currently free. The output of this method is a dictionary containing Map objects with the TS and amplitude of the best-fit test source. By default this method will also save maps to FITS files and render them as image files.

Parameters: prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). model (dict) – Dictionary defining the properties of the test source. do_sed (bool) – Compute the energy bin-by-bin fits. nnorm (int) – Number of points in the likelihood vs. normalization scan. norm_sigma (float) – Number of sigma to use for the scan range. tol (float) – Criteria for fit convergence (estimated vertical distance to min < tol). tol_type (int) – Absolute (0) or relative (1) criteria for convergence. max_iter (int) – Maximum number of iterations for the Newton’s method fitter. remake_test_source (bool) – If true, recomputes the test source image (otherwise just shifts it). st_scan_level (int) – make_plots (bool) – Write image files. write_fits (bool) – Write a FITS file with the results of the analysis. Returns: maps (dict) – A dictionary containing the Map objects for TS and source amplitude.
class fermipy.tsmap.TSMapGenerator[source]

Bases: object

Mixin class for GTAnalysis that generates TS maps.

tsmap(prefix=u'', **kwargs)[source]

Generate a spatial TS map for a source component with properties defined by the model argument. The TS map will have the same geometry as the ROI. The output of this method is a dictionary containing Map objects with the TS and amplitude of the best-fit test source. By default this method will also save maps to FITS files and render them as image files.

This method uses a simplified likelihood fitting implementation that only fits for the normalization of the test source. Before running this method it is recommended to first optimize the ROI model (e.g. by running optimize()).

Parameters: prefix (str) – Optional string that will be prepended to all output files (FITS and rendered images). model (dict) – Dictionary defining the properties of the test source. exclude (str or list of str) – Source or sources that will be removed from the model when computing the TS map. erange (list) – Restrict the analysis to an energy range (emin,emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax are None then only an upper/lower bound on the energy range will be applied. max_kernel_radius (float) – Set the maximum radius of the test source kernel. Using a smaller value will speed up the TS calculation at the loss of accuracy. The default value is 3 degrees. make_plots (bool) – Write image files. write_fits (bool) – Write a FITS file. write_npy (bool) – Write a numpy file. Returns: maps (dict) – A dictionary containing the Map objects for TS and source amplitude.
fermipy.tsmap.cash(counts, model)[source]

Compute the Poisson log-likelihood function.
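A common form of this statistic (a sketch; the sign and constant conventions may differ from fermipy's) is twice the negative Poisson log-likelihood with the data-only factorial term dropped:

```python
import numpy as np

def cash(counts, model):
    """Cash statistic: 2 * (model - counts * log(model)), i.e. twice the
    negative Poisson log-likelihood without the log(counts!) constant."""
    return 2.0 * (model - counts * np.log(model))

counts = np.array([10.0, 5.0])
```

For fixed counts, the statistic is minimized per pixel when the model equals the data, which is what makes it usable as a fit statistic for the TS maps above.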

fermipy.tsmap.convert_tscube(infile, outfile)[source]

Convert between old and new TSCube formats.

fermipy.tsmap.extract_array(array_large, array_small, position)[source]
fermipy.tsmap.extract_images_from_tscube(infile, outfile)[source]

Extract data from table HDUs in TSCube file and convert them to FITS images

fermipy.tsmap.extract_large_array(array_large, array_small, position)[source]
fermipy.tsmap.extract_small_array(array_small, array_large, position)[source]
fermipy.tsmap.f_cash(x, counts, background, model)[source]

Wrapper for cash statistics, that defines the model function.

Parameters: x (float) – Model amplitude. counts (ndarray) – Count map slice, where model is defined. background (ndarray) – Background map slice, where model is defined. model (ndarray) – Source template (multiplied with exposure).
fermipy.tsmap.f_cash_sum(x, counts, background, model)[source]
fermipy.tsmap.overlap_slices(large_array_shape, small_array_shape, position)[source]

Modified version of overlap_slices.

Get slices for the overlapping part of a small and a large array.

Given a certain position of the center of the small array, with respect to the large array, tuples of slices are returned which can be used to extract, add or subtract the small array at the given position. This function takes care of the correct behavior at the boundaries, where the small array is cut off appropriately.

Parameters: large_array_shape (tuple) – Shape of the large array. small_array_shape (tuple) – Shape of the small array. position (tuple) – Position of the small array’s center, with respect to the large array. Coordinates should be in the same order as the array shape. slices_large (tuple of slices) – Slices in all directions for the large array, such that large_array[slices_large] extracts the region of the large array that overlaps with the small array. slices_small (slice) – Slices in all directions for the small array, such that small_array[slices_small] extracts the region that is inside the large array.
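The slice bookkeeping can be sketched per axis as below. This is a simplified illustration of the technique, not fermipy's modified implementation (which handles additional edge cases):

```python
def overlap_slices(large_shape, small_shape, position):
    """Per-axis slices extracting the overlap of a small array whose
    center sits at `position` in a large array."""
    slices_large, slices_small = [], []
    for big, small, pos in zip(large_shape, small_shape, position):
        lo = pos - small // 2          # where the small array starts in the large one
        hi = lo + small                # one past where it ends
        # Clip to the large array's bounds on both sides.
        slices_large.append(slice(max(lo, 0), min(hi, big)))
        # Mirror the clipping into the small array's own coordinates.
        slices_small.append(slice(max(-lo, 0), small - max(hi - big, 0)))
    return tuple(slices_large), tuple(slices_small)
```

By construction the two slice tuples select regions of equal shape, so `large[slices_large] += small[slices_small]` is always valid at the boundaries.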
fermipy.tsmap.poisson_log_like(counts, model)[source]

Compute the Poisson log-likelihood function for the given counts and model arrays.

fermipy.tsmap.sum_arrays(x)[source]
fermipy.tsmap.truncate_array(array1, array2, position)[source]

Truncate array1 by finding the overlap with array2 when the array1 center is located at the given position in array2.

## fermipy.residmap module¶

class fermipy.residmap.ResidMapGenerator[source]

Bases: object

Mixin class for GTAnalysis that generates spatial residual maps from the difference of data and model maps smoothed with a user-defined spatial/spectral template. The map of residual significance can be interpreted in the same way as a TS map (the likelihood of a source at the given location).

residmap(prefix=u'', **kwargs)[source]

Generate 2-D spatial residual maps using the current ROI model and the convolution kernel defined with the model argument.

Parameters: prefix (str) – String that will be prefixed to the output residual map files. model (dict) – Dictionary defining the properties of the convolution kernel. exclude (str or list of str) – Source or sources that will be removed from the model when computing the residual map. erange (list) – Restrict the analysis to an energy range (emin,emax) in log10(E/MeV) that is a subset of the analysis energy range. By default the full analysis energy range will be used. If either emin/emax are None then only an upper/lower bound on the energy range will be applied. make_plots (bool) – Write image files. write_fits (bool) – Write FITS files. Returns: maps (dict) – A dictionary containing the Map objects for the residual significance and amplitude.
fermipy.residmap.convolve_map(m, k, cpix, threshold=0.001, imin=0, imax=None)[source]

Perform an energy-dependent convolution on a sequence of 2-D spatial maps.

Parameters: m (ndarray) – 3-D map containing a sequence of 2-D spatial maps. First dimension should be energy. k (ndarray) – 3-D map containing a sequence of convolution kernels (PSF) for each slice in m. This map should have the same dimension as m. cpix (list) – Indices of kernel reference pixel in the two spatial dimensions. threshold (float) – Kernel amplitude imin (int) – Minimum index in energy dimension. imax (int) – Maximum index in energy dimension.
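The slice-by-slice structure of such a convolution can be sketched as below, assuming energy along the first axis; this toy version omits the threshold/imin/imax options:

```python
import numpy as np
from scipy.signal import fftconvolve

def convolve_slices(m, k):
    """Convolve each energy slice of a 3-D map with its matching kernel
    slice (energy along the first axis)."""
    return np.array([fftconvolve(mi, ki, mode="same")
                     for mi, ki in zip(m, k)])

# Toy inputs: 2 energy slices of a 5x5 map, delta-function kernels.
rng = np.random.default_rng(0)
m = rng.random((2, 5, 5))
k = np.zeros((2, 5, 5))
k[:, 2, 2] = 1.0   # a centered delta kernel leaves each slice unchanged
```

Using a distinct kernel per energy slice is what makes the convolution energy dependent, e.g. a PSF that narrows with increasing energy.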
fermipy.residmap.get_source_kernel(gta, name, kernel=None)[source]

Get the PDF for the given source.

fermipy.residmap.poisson_lnl(nc, mu)[source]