Quickstart Guide

This page walks through the steps to set up and perform a basic spectral analysis of a source. For additional fermipy tutorials, see the IPython Notebook Tutorials section below.

Creating a Configuration File

The first step is to compose a configuration file that defines the data selection and analysis parameters. Complete documentation on the configuration file and available options is given in the Configuration page. fermiPy uses the YAML format for its configuration files. The configuration file has a hierarchical structure that groups sets of related options into sections. The following example is a configuration file for a SOURCE-class analysis of Markarian 421 with FRONT+BACK event types (evtype=3):

data :
  evfile : ft1.lst
  scfile : ft2.fits

binning :
  roiwidth   : 10.0
  binsz      : 0.1
  binsperdec : 8

selection :
  emin : 100
  emax : 100000
  zmax    : 90
  evclass : 128
  evtype  : 3
  target : 'mkn421'
  tmin    : 239557414
  tmax    : 428903014
  filter  : null

gtlike :
  edisp : True
  irfs : 'P8R2_SOURCE_V6'
  edisp_disable : ['isodiff','galdiff']

model :
  src_roiwidth : 15.0
  galdiff  : '$FERMI_DIFFUSE_DIR/gll_iem_v06.fits'
  isodiff  : 'iso_P8R2_SOURCE_V6_v06.txt'
  catalogs : ['3FGL']

The data section defines the input data set and spacecraft file for the analysis. Here evfile points to a list of FT1 files that encompass the chosen ROI, energy range, and time selection. The parameters in the binning section define the dimensions of the ROI and the spatial and energy bin size. The selection section defines parameters related to the data selection (energy range, zmax cut, and event class/type). The target parameter in this section sets the ROI center to the coordinates of the named source. The gtlike section sets parameters of the likelihood evaluation (IRF name and energy dispersion handling). The model section defines parameters related to the ROI model definition (diffuse templates, point sources).

fermiPy gives the user the option to combine multiple data selections into a joint likelihood with the components section. The components section contains a list of dictionaries with the same hierarchy as the root analysis configuration. Each element of the list defines the analysis parameters for an independent sub-selection of the data. Any parameters not defined within the component dictionary default to the value defined in the root configuration. The following example shows the components section that could be appended to the previous configuration to define a joint analysis with four PSF event types:

components:
  - { selection : { evtype : 4  } } # PSF0
  - { selection : { evtype : 8  } } # PSF1
  - { selection : { evtype : 16 } } # PSF2
  - { selection : { evtype : 32 } } # PSF3

Any configuration parameter can be changed with this mechanism. The following example is a configuration in which a different zmax selection and isotropic template are used for each of the four PSF event types:

components:
  - model: {isodiff: isotropic_source_psf0_4years_P8V3.txt}
    selection: {evtype: 4, zmax: 70}
  - model: {isodiff: isotropic_source_psf1_4years_P8V3.txt}
    selection: {evtype: 8, zmax: 75}
  - model: {isodiff: isotropic_source_psf2_4years_P8V3.txt}
    selection: {evtype: 16, zmax: 85}
  - model: {isodiff: isotropic_source_psf3_4years_P8V3.txt}
    selection: {evtype: 32, zmax: 90}

Creating an Analysis Script

Once the configuration file has been composed, the analysis is executed by creating an instance of GTAnalysis with the configuration file as its argument and calling its analysis methods. GTAnalysis serves as a wrapper over the underlying pyLikelihood classes and provides methods to fix/free parameters, add/remove sources from the model, and perform a fit to the ROI. For complete documentation of the available methods, refer to the fermipy package page.

In the following python examples we show how to initialize and run a basic analysis of a source. First we instantiate a GTAnalysis object with the path to the configuration file and run setup().

from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml',logging={'verbosity' : 3})
gta.setup()

The setup() method performs the data preparation and response calculations needed for the analysis (selecting the data, creating counts and exposure maps, etc.). Depending on the data selection and binning of the analysis this will often be the slowest step in the analysis sequence. The output of setup() is cached in the analysis working directory so subsequent calls to setup() will run much faster.

Before running any other analysis methods, it is recommended to first run optimize():
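gta.optimize()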


This will loop over all model components in the ROI and fit their normalization and spectral shape parameters. This method also computes the TS of all sources which can be useful for identifying weak sources that could be fixed or removed from the model. We can check the results of the optimization step by calling print_roi():
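gta.print_roi()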


By default all model parameters are initially fixed. The free_source() and free_sources() methods can be used to free or fix parameters of the model. In the following example we free the normalization of catalog sources within 3 deg of the ROI center and free the galactic and isotropic components by name.

# Free Normalization of all Sources within 3 deg of ROI center
gta.free_sources(distance=3.0,pars='norm')

# Free all parameters of isotropic and galactic diffuse components
gta.free_source('galdiff')
gta.free_source('isodiff')

The minmax_ts and minmax_npred arguments to free_sources() can be used to free or fix sources on the basis of their current TS or Npred values:

# Free sources with TS > 10
gta.free_sources(minmax_ts=[10,None])

# Fix sources with TS < 10
gta.free_sources(minmax_ts=[None,10],free=False)

# Fix sources with 10 < Npred < 100
gta.free_sources(minmax_npred=[10,100],free=False)

When passing a source name argument both case and whitespace are ignored. When using a FITS catalog file a source can also be referred to by any of its associations. When using the 3FGL catalog, the following calls are equivalent ways of freeing the parameters of Mkn 421:

# These calls are equivalent
gta.free_source('Mkn 421')
gta.free_source('3FGL J1104.4+3812')

After freeing parameters of the model we can execute a fit by calling fit(). This will maximize the likelihood with respect to the model parameters that are currently free.
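gta.fit()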


After the fitting is complete, we can write the current state of the model with write_roi():
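gta.write_roi('fit_model')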


This will write several output files including an XML model file and an ROI dictionary file. The names of all output files will be prepended with the prefix argument to write_roi().

Once we have optimized our model for the ROI we can use the residmap() and tsmap() methods to assess the fit quality and look for new sources.

# Dictionary defining the spatial/spectral template
model = {'SpatialModel' : 'PointSource', 'Index' : 2.0,
         'SpectrumType' : 'PowerLaw'}

# Both methods return a dictionary with the maps
m0 = gta.residmap('fit_model',model=model)
m1 = gta.tsmap('fit_model',model=model)

More documentation about these methods is available in the Source Detection page.

By default, calls to fit() will execute a global spectral fit over the entire energy range of the analysis. To extract a bin-by-bin flux spectrum (i.e. a SED) you can call the sed() method with the name of the source:
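gta.sed('mkn421')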


More information about the sed() method can be found in the SED Analysis page.

Extracting Analysis Results

Results of the analysis can be extracted from the dictionary file written by write_roi(). This method writes the current ROI model to both an XML model file and a results dictionary. More documentation on the contents of the output file is available in the Output File page.

The results dictionary is written in both npy and yaml formats and can be loaded from a python session after your analysis is complete. The following example demonstrates how to load the dictionary from either format:

>>> # Load from yaml
>>> import yaml
>>> c = yaml.load(open('fit_model.yaml'))
>>> # Load from npy
>>> import numpy as np
>>> c = np.load('fit_model.npy').flat[0]
>>> print c.keys()
['roi', 'config', 'sources', 'version']

The output dictionary contains the following top-level elements:

File Dictionary

Key      Type  Description
roi      dict  A dictionary containing information about the ROI as a whole.
sources  dict  A dictionary containing information for individual sources in the model (diffuse and point-like). Each element of this dictionary maps to a single source in the ROI model.
config   dict  The configuration dictionary of the GTAnalysis instance.
version  str   The version of the fermiPy package that was used to run the analysis. This is automatically generated from the git release tag.

Each source dictionary collects the properties of the given source (TS, NPred, best-fit parameters, etc.) computed up to that point in the analysis.

>>> print c['sources'].keys()
['3FGL J0954.2+4913',
 '3FGL J0957.4+4728',
 '3FGL J1006.7+3453',


 '3FGL J1153.4+4932',
 '3FGL J1159.5+2914',
 '3FGL J1203.2+3847',
 '3FGL J1209.4+4119',
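The properties of an individual source can then be read from its dictionary by name. The 'ts' and 'npred' keys used below are assumed names for these quantities in recent fermipy versions; see the Output File page for the definitive list of keys:

>>> src = c['sources']['3FGL J1104.4+3812']
>>> print src['ts'], src['npred']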

Reloading from a Previous State

One can reload an analysis instance that was saved with write_roi() by calling either the create() or load_roi() methods. The create() method can be used to construct an entirely new instance of GTAnalysis from a previously saved results file:

from fermipy.gtanalysis import GTAnalysis
gta = GTAnalysis.create('fit_model.npy')

# Continue running analysis starting from the previously saved
# state (for example, by running a new fit)
gta.fit()

where the argument is the path to an output file produced with write_roi(). This function will instantiate a new analysis object, run the setup() method, and load the state of the model parameters at the time that write_roi() was called.

The load_roi() method can be used to reload a previous state of the analysis to an existing instance of GTAnalysis.

from fermipy.gtanalysis import GTAnalysis

gta = GTAnalysis('config.yaml')
gta.setup()

# Save a snapshot of the current state of the model
gta.write_roi('prefit_model')

# Fit a source
gta.free_source('mkn421')
gta.fit()

# Restore the analysis to its prior state before the fit of mkn421
# was executed
gta.load_roi('prefit_model')
Using load_roi() is generally faster than create() when an analysis instance already exists.

IPython Notebook Tutorials

Additional tutorials with more detailed examples are available as IPython notebooks in the notebooks directory of the fermipy-extra repository. These notebooks can be browsed as static web pages or run interactively by downloading the fermipy-extra repository and running jupyter notebook in the notebooks directory:

$ git clone https://github.com/fermiPy/fermipy-extra.git
$ cd fermipy-extra/notebooks
$ jupyter notebook index.ipynb

Note that this will require you to have both ipython and jupyter installed in your python environment. These can be installed in a conda- or pip-based installation as follows:

# Install with conda
$ conda install ipython jupyter

# Install with pip
$ pip install ipython jupyter