astropy.io.fits.writeto Python Example Handling FITS files
Thursday, December 13, 2018

esheldon / fitsio

A python package for FITS input/output wrapping cfitsio

Topics: python, cfitsio, astrophysics, astronomy



License: GPL-2.0


README.md

A python library to read from and write to FITS files.


Do not use numpy 1.10.0 or 1.10.1

There is a serious performance regression in numpy 1.10 that results
in fitsio running tens to hundreds of times slower. A fix may be
forthcoming in a later release. Please comment here if this
has already impacted your work https://github.com/numpy/numpy/issues/6467
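A quick way to check whether your installed numpy is one of the affected releases (a minimal sketch; only the two versions named above are flagged):

```python
import numpy as np

# Only numpy 1.10.0 and 1.10.1 exhibit the performance regression
# described above; any other version is fine.
affected = np.__version__ in ('1.10.0', '1.10.1')
print(affected)
```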

Description

This is a python extension written in c and python. Data are read into
numerical python arrays.

A version of cfitsio is bundled with this package; there is no need to install
your own, nor will the bundled version conflict with one you have installed.

Some Features

  • Read from and write to image, binary, and ascii table extensions.
  • Read arbitrary subsets of table columns and rows without loading all the data
    to memory.
  • Read image subsets without reading the whole image. Write subsets to existing images.
  • Write and read variable length table columns.
  • Read images and tables using slice notation similar to numpy arrays. This is like a more
    powerful memmap, since it is column-aware for tables.
  • Append rows to an existing table. Delete row sets and row ranges. Resize tables,
    or insert rows.
  • Query the columns and rows in a table.
  • Read and write header keywords.
  • Read and write images in tile-compressed format (RICE,GZIP,PLIO,HCOMPRESS).
  • Read/write gzip files directly. Read unix compress (.Z,.zip) and bzip2 (.bz2) files.
  • TDIM information is used to return array columns in the correct shape.
  • Write and read string table columns, including array columns of arbitrary
    shape.
  • Read and write complex, bool (logical), unsigned integer, and signed byte types.
  • Write checksums into the header and verify them.
  • Insert new columns into tables in-place.
  • Iterate over rows in a table. Data are buffered for efficiency.
  • python 3 support

Examples

import fitsio
from fitsio import FITS, FITSHDR

# Often you just want to quickly read or write data without bothering to
# create a FITS object.  In that case, you can use the read and write
# convenience functions.

# read all data from the first hdu with data
filename = 'data.fits'
data = fitsio.read(filename)

# read a subset of rows and columns from a table
data = fitsio.read(filename, rows=[35, 1001], columns=['x', 'y'], ext=2)

# read the header, or both at once
h = fitsio.read_header(filename, extension)
data, h = fitsio.read(filename, ext=ext, header=True)

# open the file, write a new binary table extension, and then write the
# data from "recarray" into the table.  By default a new extension is
# added to the file.  Use clobber=True to overwrite an existing file
# instead.  To append rows to an existing table, see below.
fitsio.write(filename, recarray)

# write an image
fitsio.write(filename, image)

# NOTE when reading row subsets, the data must still be read from disk.
# This is most efficient if the data are read in the order they appear in
# the file.  For this reason, the rows are always returned in row-sorted
# order.

#
# the FITS class gives you the ability to explore the data, and gives
# more control
#

# open a FITS file for reading and explore
fits = fitsio.FITS('data.fits')

# see what is in here; the FITS object prints itself
print(fits)

file: data.fits
mode: READONLY
extnum hdutype         hduname
0      IMAGE_HDU
1      BINARY_TBL      mytable

# at the python prompt, you could just type "fits" and it will
# automatically print itself.  Same for ipython.
>>> fits
file: data.fits
... etc

# explore the extensions, either by extension number or
# extension name if available
print(fits[0])

file: data.fits
extension: 0
type: IMAGE_HDU
image info:
  data type: f8
  dims: [4096,2048]

print(fits['mytable'])  # can also use fits[1]

file: data.fits
extension: 1
type: BINARY_TBL
extname: mytable
rows: 4328342
column info:
  i1scalar  u1
  f         f4
  fvec      f4  array[2]
  darr      f8  array[3,2]
  dvarr     f8  varray[10]
  s         S5
  svec      S6  array[3]
  svar      S0  vstring[8]
  sarr      S2  array[4,3]

# See bottom for how to get more information for an extension

# [-1] refers to the last HDU
print(fits[-1])
...

# if there are multiple HDUs with the same name, and an EXTVER
# is set, you can use it.  Here extver=2
#     fits['mytable', 2]

# read the image from extension zero
img = fits[0].read()
img = fits[0][:, :]

# read a subset of the image without reading the whole image
img = fits[0][25:35, 45:55]

# read all rows and columns from a binary table extension
data = fits[1].read()
data = fits['mytable'].read()
data = fits[1][:]

# read a subset of rows and columns.  By default uses a case-insensitive
# match.  The result retains the names with original case.  If columns is a
# sequence, a recarray is returned
data = fits[1].read(rows=[1, 5], columns=['index', 'x', 'y'])

# Similar but using slice notation
# row subsets
data = fits[1][10:20]
data = fits[1][10:20:2]
data = fits[1][[1, 5, 18]]

# all rows of column 'x'
data = fits[1]['x'][:]

# Read a few columns at once.  This is more efficient than a separate
# read for each column
data = fits[1]['x', 'y'][:]

# General column and row subsets.  As noted above, the data are returned
# in row sorted order for efficiency reasons.
columns = ['index', 'x', 'y']
rows = [1, 5]
data = fits[1][columns][rows]

# iterate over rows in a table hdu
# faster if we buffer some rows, let's buffer 1000 at a time
fits = fitsio.FITS(filename, iter_row_buffer=1000)
for row in fits[1]:
    print(row)

# iterate over HDUs in a FITS object
for hdu in fits:
    data = hdu.read()

# Note dvarr shows type varray[10] and svar shows type vstring[8].  These
# are variable length columns and the number specified is the maximum size.
# By default they are read into fixed-length fields in the output array.
# You can over-ride this by constructing the FITS object with the vstorage
# keyword or specifying vstorage when reading.  Sending vstorage='object'
# will store the data in variable size object fields to save memory; the
# default is vstorage='fixed'.  Object fields can also be written out to a
# new FITS file as variable length to save disk space.
fits = fitsio.FITS(filename, vstorage='object')
# OR
data = fits[1].read(vstorage='object')
print(data['dvarr'].dtype)
    dtype('object')

# you can grab a FITS HDU object to simplify notation
hdu1 = fits[1]
data = hdu1['x', 'y'][35:50]

# get rows that satisfy the input expression.  See "Row Filtering
# Specification" in the cfitsio manual (note no temporary table is
# created in this case, contrary to the cfitsio docs)
w = fits[1].where("x > 0.25 && y < 35.0")
data = fits[1][w]

# read the header
h = fits[0].read_header()
print(h['BITPIX'])
    -64

fits.close()

# now write some data
fits = FITS('test.fits', 'rw')

# create a rec array.  Note vstr is a variable length string
nrows = 35
data = numpy.zeros(nrows, dtype=[('index', 'i4'), ('vstr', 'O'), ('x', 'f8'),
                                 ('arr', 'f4', (3, 4))])
data['index'] = numpy.arange(nrows, dtype='i4')
data['x'] = numpy.random.random(nrows)
data['vstr'] = [str(i) for i in range(nrows)]
data['arr'] = numpy.arange(nrows * 3 * 4, dtype='f4').reshape(nrows, 3, 4)

# create a new table extension and write the data
fits.write(data)

# can also be a list of ordinary arrays if you send the names
array_list = [xarray, yarray, namearray]
names = ['x', 'y', 'name']
fits.write(array_list, names=names)

# similarly a dict of arrays
fits.write(dict_of_arrays)
fits.write(dict_of_arrays, names=names)  # control name order

# append more rows to the table.  The fields in data2 should match columns
# in the table.  Missing columns will be filled with zeros
fits[-1].append(data2)

# insert a new column into a table
fits[-1].insert_column('newcol', data)

# insert with a specific colnum
fits[-1].insert_column('newcol', data, colnum=2)

# overwrite rows
fits[-1].write(data)

# overwrite starting at a particular row.  The table will grow if needed
fits[-1].write(data, firstrow=350)

# create an image
img = numpy.arange(2 * 3, dtype='i4').reshape(2, 3)

# write an image in a new HDU (if this is a new file, the primary HDU)
fits.write(img)

# write an image with rice compression
fits.write(img, compress='rice')

# overwrite the image
fits[ext].write(img2)

# write into an existing image, starting at the location [300,400]
# the image will be expanded if needed
fits[ext].write(img3, start=[300, 400])

# change the shape of the image on disk
fits[ext].reshape([250, 100])

# add checksums for the data
fits[-1].write_checksum()

# can later verify data integrity
fits[-1].verify_checksum()

# you can also write a header at the same time.  The header can be
#   - a simple dict (no comments)
#   - a list of dicts with 'name','value','comment' fields
#   - a FITSHDR object
hdict = {'somekey': 35, 'location': 'kitt peak'}
fits.write(data, header=hdict)
hlist = [{'name': 'observer', 'value': 'ES', 'comment': 'who'},
         {'name': 'location', 'value': 'CTIO'},
         {'name': 'photometric', 'value': True}]
fits.write(data, header=hlist)
hdr = FITSHDR(hlist)
fits.write(data, header=hdr)

# you can add individual keys to an existing HDU
fits[1].write_key(name, value, comment="my comment")

# Write multiple header keys to an existing HDU.  Here records
# is the same as sent with header= above
fits[1].write_keys(records)

# write special COMMENT fields
fits[1].write_comment("observer JS")
fits[1].write_comment("we had good weather")

# write special history fields
fits[1].write_history("processed with software X")
fits[1].write_history("re-processed with software Y")

fits.close()

# using a context, the file is closed automatically after leaving the block
with FITS('path/to/file') as fits:
    data = fits[ext].read()

    # you can check if a header exists using "in":
    if 'blah' in fits:
        data = fits['blah'].read()
    if 2 in fits:
        data = fits[2].read()

# methods to get more information about an extension.  For extension 1:
f[1].get_info()             # lots of info about the extension
f[1].has_data()             # returns True if data is present in extension
f[1].get_extname()
f[1].get_extver()
f[1].get_extnum()           # return zero-offset extension number
f[1].get_exttype()          # 'BINARY_TBL' or 'ASCII_TBL' or 'IMAGE_HDU'
f[1].get_offsets()          # byte offsets (header_start, data_start, data_end)
f[1].is_compressed()        # for images.  True if tile-compressed
f[1].get_colnames()         # for tables
f[1].get_colname(colnum)    # for tables, find the name from column number
f[1].get_nrows()            # for tables
f[1].get_rec_dtype()        # for tables
f[1].get_rec_column_descr() # for tables
f[1].get_vstorage()         # for tables, storage mechanism for variable
                            # length columns

# public attributes you can feel free to change as needed
f[1].lower           # If True, lower case colnames on output
f[1].upper           # If True, upper case colnames on output
f[1].case_sensitive  # if True, names are matched case sensitive

Installation

The easiest way is using pip or conda. To get the latest release

pip install fitsio
# update fitsio (and everything else)
pip install fitsio --upgrade
# if pip refuses to update to a newer version
pip install fitsio --upgrade --ignore-installed
# if you only want to upgrade fitsio
pip install fitsio --no-deps --upgrade --ignore-installed
# for conda, use conda-forge
conda install -c conda-forge fitsio

You can also get the latest source tarball release from

https://pypi.python.org/pypi/fitsio

or the bleeding edge source from github or use git. To check out
the code for the first time

git clone https://github.com/esheldon/fitsio.git

Or at a later time to update to the latest

cd fitsio
git pull

Use tar xvfz to untar the file, enter the fitsio directory and type

python setup.py install

optionally with a prefix

python setup.py install --prefix=/some/path

Requirements

- python 2 or python 3
- A C compiler and build tools such as make
- numpy (numerical python)

Tests

On a fully supported platform, the unit tests should all pass:

import fitsio
fitsio.test.test()

Some tests may fail if certain libraries are not available, such as
bzip2. Such a failure only means that bzip2-compressed files cannot be
read; other functionality is unaffected.

TODO

  • HDU groups: does anyone use these? If so open an issue!

Notes on cfitsio bundling

We bundle partly because many deployed versions of cfitsio in the wild do not
have support for interesting features like tiled image compression. Bundling
a version that meets our needs is a safe alternative.

Note on array ordering

Since numpy uses C order while FITS uses Fortran order, we have to write the
TDIM and image dimensions in reverse order, but write the data as is. Likewise,
when creating the numpy dtype we reverse the dims read from the header,
but read the data as is.
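The dimension reversal above can be sketched with numpy alone (no FITS I/O involved; the shape here is arbitrary):

```python
import numpy as np

# numpy arrays are C (row-major) ordered: shape = (ny, nx),
# with the last axis varying fastest.
img = np.arange(12, dtype='i4').reshape(3, 4)   # ny=3, nx=4

# FITS is Fortran (column-major) ordered: NAXIS1 is the fastest-varying
# axis, so the header dimensions are the numpy shape written in reverse.
fits_dims = list(reversed(img.shape))           # [NAXIS1, NAXIS2]

print(fits_dims)
```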






Python astropy.io.fits.writeto() Examples

The following are 11 code examples showing how to use astropy.io.fits.writeto(). They are extracted from open source Python projects.
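Before the project examples below, here is a minimal, self-contained sketch of the call itself. The file path, array values, and OBSERVER keyword are arbitrary placeholders; note that modern astropy spells the overwrite flag overwrite=, where older releases (and several of the examples below) used the deprecated clobber=.

```python
import os
import tempfile

import numpy as np
from astropy.io import fits

# A small image and a header with one custom card.
data = np.arange(12, dtype='f4').reshape(3, 4)
hdr = fits.Header()
hdr['OBSERVER'] = 'ES'  # arbitrary placeholder value

# Write to a temporary path; overwrite=True replaces an existing file.
path = os.path.join(tempfile.mkdtemp(), 'example.fits')
fits.writeto(path, data, hdr, overwrite=True)

# Read the data and header back to confirm the round trip.
back, back_hdr = fits.getdata(path, header=True)
print(back.shape, back_hdr['OBSERVER'])
```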

Example 1
Project: integration-prototype
Author: SKA-ScienceDataProcessor
File: example_mpi_imager.py

def save_image(imager, grid_data, grid_norm, output_file):
    """Makes an image from gridded visibilities and saves it to a FITS file.

    Args:
        imager (oskar.Imager):     Handle to configured imager.
        grid_data (numpy.ndarray): Final visibility grid.
        grid_norm (float):         Grid normalisation to apply.
        output_file (str):         Name of output FITS file to write.
    """
    # Make the image (take the FFT, normalise, and apply grid correction).
    imager.finalise_plane(grid_data, grid_norm)
    grid_data = numpy.real(grid_data)

    # Trim the image if required.
    border = (imager.plane_size - imager.image_size) // 2
    if border > 0:
        end = border + imager.image_size
        grid_data = grid_data[border:end, border:end]

    # Write the FITS file.
    hdr = fits.header.Header()
    fits.writeto(output_file, grid_data, hdr, clobber=True)

Example 2
Project: integration-prototype
Author: SKA-ScienceDataProcessor
File: example_spark_imager.py

The save_image function in this example is identical, line for line, to the one in Example 1.

Example 3
Project: algorithm-reference-library
Author: SKA-ScienceDataProcessor
File: operations.py

def export_image_to_fits(im: Image, fitsfile: str = 'imaging.fits'):
    """ Write an image to fits

    :param im: Image
    :param fitsfile: Name of output fits file
    """
    assert isinstance(im, Image), im
    return fits.writeto(filename=fitsfile, data=im.data,
                        header=im.wcs.to_header(), overwrite=True)

Example 4
Project: ZOGY
Author: pmvreeswijk
File: zogy.py

def ds9_arrays(**kwargs):
    cmd = ['ds9', '-zscale', '-zoom', '4', '-cmap', 'heat']
    for name, array in kwargs.items():
        # write array to fits
        fitsfile = 'ds9_' + name + '.fits'
        fits.writeto(fitsfile, np.array(array).astype(np.float32),
                     clobber=True)
        # append to command
        cmd.append(fitsfile)
    # print 'cmd', cmd
    result = subprocess.call(cmd)

################################################################################
Example 5
Project: decode
Author: deshima-dev
File: classes.py

def savefits(cube, fitsname, **kwargs):
    logger = getLogger('decode.io.savefits')

    ### pick up kwargs
    dropdeg = kwargs.pop('dropdeg', False)
    ndim = len(cube.dims)

    ### load yaml
    FITSINFO = get_data('decode', 'data/fitsinfo.yaml')
    hdrdata = yaml.load(FITSINFO)

    ### default header
    if ndim == 2:
        header = fits.Header(hdrdata['dcube_2d'])
        data = cube.values.T
    elif ndim == 3:
        if dropdeg:
            header = fits.Header(hdrdata['dcube_2d'])
            data = cube.values[:, :, 0].T
        else:
            header = fits.Header(hdrdata['dcube_3d'])
            data = cube.values.T
    else:
        raise TypeError(ndim)

    ### update header
    if cube.coordsys == 'AZEL':
        header.update({'CTYPE1': 'dAZ', 'CTYPE2': 'dEL'})
    elif cube.coordsys == 'RADEC':
        header.update({'OBSRA': float(cube.xref), 'OBSDEC': float(cube.yref)})
    else:
        pass

    header.update({'CRVAL1': float(cube.x[0]),
                   'CDELT1': float(cube.x[1] - cube.x[0]),
                   'CRVAL2': float(cube.y[0]),
                   'CDELT2': float(cube.y[1] - cube.y[0]),
                   'DATE': datetime.now(timezone('UTC')).isoformat()})

    if (ndim == 3) and (not dropdeg):
        header.update({'CRVAL3': float(cube.kidid[0])})

    fits.writeto(fitsname, data, header, **kwargs)
    logger.info('{} has been created.'.format(fitsname))
Example 6
Project: CAAPR
Author: Stargrazer82301
File: rgbimage.py

def saveto(self, filepath, tiff16bit=False):
    filepath = os.path.expanduser(filepath)
    lowfile = filepath.lower()
    # 16-bit TIFF file
    if tiff16bit and lowfile.endswith((".tif", ".tiff")):
        # tiff header as uint16 words
        lsNX, msNX = split16(self.shape[0])
        lsNY, msNY = split16(self.shape[1])
        lsBYTES, msBYTES = split16(self.shape[0] * self.shape[1] * 6)
        header = ( \
            0x4949, 42, 8, 0,               #  0: TIFF header (little endian)
            12,                             #  8: number of directory entries
            # (directory entry: tag,type,count,0,value/offset x 2)
            256, 4, 1, 0, lsNX, msNX,       #  10: ImageWidth, 1 LONG
            257, 4, 1, 0, lsNY, msNY,       #  22: ImageLength, 1 LONG
            258, 3, 3, 0, 158, 0,           #  34: BitsPerSample, 3 SHORT (-> offset!)
            259, 3, 1, 0, 1, 0,             #  46: Compression, 1 SHORT
            262, 3, 1, 0, 2, 0,             #  58: PhotometricInterpretation, 1 SHORT
            273, 4, 1, 0, 180, 0,           #  70: StripOffsets, 1 LONG
            277, 3, 1, 0, 3, 0,             #  82: SamplesPerPixel, 1 SHORT
            278, 4, 1, 0, lsNY, msNY,       #  94: RowsPerStrip, 1 LONG
            279, 4, 1, 0, lsBYTES, msBYTES, # 106: StripByteCounts, 1 LONG
            282, 5, 1, 0, 164, 0,           # 118: XResolution, 1 RATIONAL (-> offset!)
            283, 5, 1, 0, 172, 0,           # 130: YResolution, 1 RATIONAL (-> offset!)
            296, 3, 1, 0, 2, 0,             # 142: ResolutionUnit, 1 SHORT
            0, 0,                           # 154: IFD list terminator
            16, 16, 16,                     # 158: BitsPerSample value
            72, 0, 1, 0,                    # 164: XResolution value
            72, 0, 1, 0 )                   # 172: YResolution value
                                            # 180: Image data
        out = open(filepath, 'w')
        out.write(np.array(header, dtype=np.uint16).tostring())
        data = self.scaledpixelarray(0, 65535.99)
        out.write(np.flipud(np.rollaxis(data, 1)).astype(np.uint16).tostring())
        out.close()
    # standard 8-bit image file
    elif lowfile.endswith((".bmp", ".gif", ".jpg", ".jpeg", ".png", ".tif", ".tiff", ".pdf")):
        self.ensurepil(invalidate=False)
        if lowfile.endswith((".jpg", ".jpeg")):
            self.dpil.save(filepath, "JPEG", quality=80)
        elif lowfile.endswith(".png"):
            self.dpil.save(filepath, "PNG")
        elif lowfile.endswith((".tif", ".tiff")):
            self.dpil.save(filepath, "TIFF")
        elif lowfile.endswith(".pdf"):
            self.dpil.save(filepath, "PDF")
    # FITS file
    elif lowfile.endswith(".fits"):
        self.ensurearr(invalidate=False)
        data = np.dstack((self.darr[:, :, 2], self.darr[:, :, 1], self.darr[:, :, 0]))
        if os.path.exists(filepath):
            os.remove(filepath)  # avoid message produced by fits library
        pyfits.writeto(filepath, data.T, clobber=True)
    # unsupported type
    else:
        raise ValueError("Filepath argument has unsupported filename extension")

## This function plots the image to the current axes in the current matplotlib figure. The image is
#  resampled to fit the axes. If \em fill is True, the image is stretched to fit the axes in both directions
#  changing the image aspect ratio if needed. If \em fill is False (the default), the axes aspect ratio
#  is adjusted so that the image aspect ratio is preserved.
Example 7
Project: CAAPR
Author: Stargrazer82301
File: rgbimage.py

The saveto method in this example is identical, line for line, to the one in Example 6.
Example 8
Project: pyphot
Author: mfouesneau
File: simpletable.py

def _fits_writeto(filename, data, header=None, output_verify='exception',
                  clobber=False, checksum=False):
    """ Create a new FITS file using the supplied data/header.
    Patched version of pyfits to correctly include provided header

    Parameters
    ----------
    filename : file path, file object, or file like object
        File to write to.  If opened, must be opened in a writeable binary
        mode such as 'wb' or 'ab+'.

    data : array, record array, or groups data object
        data to write to the new file

    header : `Header` object, optional
        the header associated with ``data``. If `None`, a header of the
        appropriate type is created for the supplied data. This argument
        is optional.

    output_verify : str
        Output verification option.  Must be one of ``"fix"``,
        ``"silentfix"``, ``"ignore"``, ``"warn"``, or ``"exception"``.
        May also be any combination of ``"fix"`` or ``"silentfix"`` with
        ``"+ignore"``, ``+warn``, or ``+exception" (e.g. ``"fix+warn"``).
        See :ref:`verify` for more info.

    clobber : bool, optional
        If `True`, and if filename already exists, it will overwrite the
        file.  Default is `False`.

    checksum : bool, optional
        If `True`, adds both ``DATASUM`` and ``CHECKSUM`` cards to the
        headers of all HDU's written to the file
    """
    hdu = pyfits.convenience._makehdu(data, header)
    hdu.header.update(header.cards)
    if hdu.is_image and not isinstance(hdu, pyfits.PrimaryHDU):
        hdu = pyfits.PrimaryHDU(data, header=header)
    hdu.writeto(filename, clobber=clobber, output_verify=output_verify,
                checksum=checksum)
Example 9
Project: pyphot
Author: mfouesneau
File: simpletable.py

def write(self, fname, **kwargs):
    """ write table into file

    Parameters
    ----------
    fname: str
        filename to export the table into

    .. note::
        additional keywords are forwarded to the corresponding libraries
        :func:`pyfits.writeto` or :func:`pyfits.append`
        :func:`np.savetxt`
    """
    extension = kwargs.pop('extension', None)
    if extension is None:
        extension = fname.split('.')[-1]
    if (extension == 'csv'):
        comments = kwargs.pop('comments', '#')
        delimiter = kwargs.pop('delimiter', ',')
        commentedHeader = kwargs.pop('commentedHeader', False)
        hdr = _ascii_generate_header(self, comments=comments,
                                     delimiter=delimiter,
                                     commentedHeader=commentedHeader)
        header = kwargs.pop('header', hdr)
        np.savetxt(fname, self.data, delimiter=delimiter, header=header,
                   comments='', **kwargs)
    elif (extension in ['txt', 'dat']):
        comments = kwargs.pop('comments', '#')
        delimiter = kwargs.pop('delimiter', ' ')
        commentedHeader = kwargs.pop('commentedHeader', True)
        hdr = _ascii_generate_header(self, comments=comments,
                                     delimiter=delimiter,
                                     commentedHeader=commentedHeader)
        header = kwargs.pop('header', hdr)
        np.savetxt(fname, self.data, delimiter=delimiter, header=header,
                   comments='', **kwargs)
    elif (extension == 'fits'):
        hdr0 = kwargs.pop('header', None)
        append = kwargs.pop('append', False)
        hdr = _fits_generate_header(self)
        if hdr0 is not None:
            hdr.update(**hdr0)
        if append:
            _fits_append(fname, self.data, hdr, **kwargs)
        else:
            # patched version to correctly include the header
            _fits_writeto(fname, self.data, hdr, **kwargs)
    elif (extension in ('hdf', 'hdf5', 'hd5')):
        _hdf5_write_data(fname, self.data, header=self.header,
                         units=self._units, comments=self._desc,
                         aliases=self._aliases, **kwargs)
    else:
        raise Exception('Format {0:s} not handled'.format(extension))
Example 10
Project: goodman
Author: soar-telescope
File: core.py

def remove_conflictive_keywords(path, file_list):
    """Removes problematic keywords

    The blue camera has a set of keywords whose comments contain non-ascii
    characters, in particular the degree symbol. Those keywords are not
    needed in any stage of the data reduction therefore they are removed.
    The data will be overwritten with the keywords removed. The user will
    need to have backups of raw data.

    Notes:
        This function solves a problem with old data, new headers are
        compliant with the headers.

    Args:
        path (str): Path to the folder containing the files
        file_list (list): List of files to remove keywords
    """
    log_ccd.debug('Removing conflictive keywords in Blue Camera Headers')
    log_ccd.warning('Files will be overwritten')
    for blue_file in file_list:
        full_path = os.path.join(path, blue_file)
        log_ccd.debug('Processing file {:s}'.format(blue_file))
        try:
            data, header = fits.getdata(full_path,
                                        header=True,
                                        ignore_missing_end=True)

            keys_to_remove = ['PARAM0',
                              'PARAM61',
                              'PARAM62',
                              'PARAM63',
                              'NAXIS3']

            if data.ndim == 3:
                header['NAXIS'] = 2
                data = data[0]
                log_ccd.debug('Modified file to be 2D instead of 3D '
                              '(problematic)')

            for keyword in keys_to_remove:
                header.remove(keyword)
                log_ccd.debug('Removed conflictive keyword '
                              '{:s}'.format(keyword))

            log_ccd.debug('Updated headers')

            fits.writeto(full_path, data, header, clobber=True)

        except KeyError as error:
            log_ccd.debug(error)

# spectroscopy specific functions
Example 11
Project: ZOGY
Author: pmvreeswijk
File: zogy.py

def fits2ldac(header4ext2, data4ext3, fits_ldac_out, doSort=True):
    """This function converts the binary FITS table from Astrometry.net
    to a binary FITS_LDAC table that can be read by PSFex. [header4ext2]
    is what will be recorded as a single long string in the data part of
    the 2nd extension of the output table [fits_ldac_out], and
    [data4ext3] is the data part of an HDU that will define both the
    header and data parts of extension 3 of [fits_ldac_out].
    """
    # convert header to single (very) long string
    ext2_str = header4ext2.tostring(endcard=False, padding=False)

    # if the following line is not added, the very end of the data
    # part of extension 2 is written to a fits table such that PSFex
    # runs into a segmentation fault when attempting to read it (took
    # me ages to find out!).
    ext2_str += 'END END'

    # read into string array
    ext2_data = np.array([ext2_str])

    # determine format string for header of extention 2
    formatstr = str(len(ext2_str)) + 'A'

    # create table 1
    col1 = fits.Column(name='Field Header Card', array=ext2_data,
                       format=formatstr)
    cols = fits.ColDefs([col1])
    ext2 = fits.BinTableHDU.from_columns(cols)

    # make sure these keywords are in the header
    ext2.header['EXTNAME'] = 'LDAC_IMHEAD'
    ext2.header['TDIM1'] = '(80, {0})'.format(len(ext2_str) / 80)

    # simply create extension 3 from [data4ext3]
    ext3 = fits.BinTableHDU(data4ext3)

    # extname needs to be as follows
    ext3.header['EXTNAME'] = 'LDAC_OBJECTS'

    # sort output table by number column if needed
    if doSort:
        index_sort = np.argsort(ext3.data['NUMBER'])
        ext3.data = ext3.data[index_sort]

    # create primary HDU
    prihdr = fits.Header()
    prihdu = fits.PrimaryHDU(header=prihdr)
    prihdu.header['EXPTIME'] = header4ext2['EXPTIME']
    prihdu.header['FILTNAME'] = header4ext2['FILTNAME']
    # prihdu.header['SEEING'] = header4ext2['SEEING']  # need to calculate and add
    prihdu.header['BKGSIG'] = header4ext2['SEXBKDEV']

    # write hdulist to output LDAC fits table
    hdulist = fits.HDUList([prihdu, ext2, ext3])
    hdulist.writeto(fits_ldac_out, clobber=True)
    hdulist.close()

################################################################################

python4astronomers


Handling FITS files

Note

If you are already familiar with PyFITS, astropy.io.fits is in
fact the same code as the latest version of PyFITS, and you can
adapt old scripts that use PyFITS to use Astropy by simply doing:

from astropy.io import fits as pyfits

However, for new scripts, we recommend the following import:

from astropy.io import fits

Documentation

For more information about the features presented below, you can read the
astropy.io.fits docs.

Data

The data used in this page (gll_iem_v02_P6_V11_DIFFUSE.fit) is an old
version of the LAT Background Model (Pass 6 V11 Diffuse front+back),
chosen so as to avoid downloading the larger, more recent file.

Reading FITS files and accessing data

Opening a FITS file is relatively straightforward. We can open the LAT
Background Model included in the tutorial files:

>>> from astropy.io import fits
>>> hdulist = fits.open('gll_iem_v02_P6_V11_DIFFUSE.fit')

The returned object, hdulist, behaves like a Python list, and each element
maps to a Header-Data Unit (HDU) in the FITS file. You can view more
information about the FITS file with:

>>> hdulist.info()
Filename: gll_iem_v02_P6_V11_DIFFUSE.fit
No.    Name         Type        Cards   Dimensions      Format
0      PRIMARY      PrimaryHDU     34   (720, 360, 30)  float32
1      ENERGIES     BinTableHDU    19   30R x 1C        [D]

As we can see, this file contains two HDUs. To access the primary HDU, which
contains the main data, you can then do:

>>> hdu = hdulist[0]

The hdu object then has two important attributes: data, which behaves
like a Numpy array, can be used to access the data, and header, which
behaves like a dictionary, can be used to access the header information.
First, we can take a look at the data:

>>> hdu.data.shape
(30, 360, 720)

This tells us that it is a 3-d cube. We can now take a peek at the header:

>>> hdu.header
SIMPLE  =                    T / Written by IDL:  Thu Jan 20 07:19:05 2011
BITPIX  =                  -32 /
NAXIS   =                    3 / number of data axes
NAXIS1  =                  720 / length of data axis 1
NAXIS2  =                  360 / length of data axis 2
NAXIS3  =                   30 / length of data axis 3
EXTEND  =                    T / FITS dataset may contain extensions
COMMENT   FITS (Flexible Image Transport System) format is defined in 'Astronomy
COMMENT   and Astrophysics', volume 376, page 359; bibcode: 2001A&A...376..359H
FLUX    =        8.42259635886 /
CRVAL1  =                   0. / Value of longitude in pixel CRPIX1
CDELT1  =                  0.5 / Step size in longitude
CRPIX1  =                360.5 / Pixel that has value CRVAL1
CTYPE1  = 'GLON-CAR'           / The type of parameter 1 (Galactic longitude in
CUNIT1  = 'deg     '           / The unit of parameter 1
CRVAL2  =                   0. / Value of latitude in pixel CRPIX2
CDELT2  =                  0.5 / Step size in latitude
CRPIX2  =                180.5 / Pixel that has value CRVAL2
CTYPE2  = 'GLAT-CAR'           / The type of parameter 2 (Galactic latitude in C
CUNIT2  = 'deg     '           / The unit of parameter 2
CRVAL3  =                  50. / Energy of pixel CRPIX3
CDELT3  =    0.113828620540137 / log10 of step size in energy (if it is logarith
CRPIX3  =                   1. / Pixel that has value CRVAL3
CTYPE3  = 'photon energy'      / Axis 3 is the spectra
CUNIT3  = 'MeV     '           / The unit of axis 3
CHECKSUM= '3fdO3caL3caL3caL'   / HDU checksum updated 2009-07-07T22:31:18
DATASUM = '2184619035'         / data unit checksum updated 2009-07-07T22:31:18
DATE    = '2009-07-07'         /
FILENAME= '$TEMPDIR/diffuse/gll_iem_v02.fit' / File name with version number
TELESCOP= 'GLAST   '           /
INSTRUME= 'LAT     '           /
ORIGIN  = 'LISOC   '           / LAT team product delivered from the LISOC
OBSERVER= 'MICHELSON'          / Instrument PI
HISTORY Scaled version of gll_iem_v02.fit for use with P6_V11_DIFFUSE

which shows that this is a Plate Carrée (-CAR) projection in Galactic
Coordinates, and the third axis is photon energy. We can access individual
header keywords using standard item notation:

>>> hdu.header['TELESCOP']
'GLAST'
>>> hdu.header['INSTRUME']
'LAT'

Provided that we started up ipython with the --matplotlib flag and did
import matplotlib.pyplot as plt, we can plot
one of the slices in photon energy:

>>> plt.imshow(hdu.data[0,:,:], origin='lower')

which gives:

../_images/lat_background_slice0.png

Note that this is just a plot of an array, so the coordinates are just pixel
coordinates at this stage. The data is stored with longitude increasing to the
right (the opposite of the normal convention), but the Level 3 problem at the
bottom of this page shows how to correctly flip the image.
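Independent of that solution, the flip itself is just a reversal of the longitude axis. A minimal numpy sketch, with a small toy array standing in for hdu.data[0,:,:] (this is one simple way to do it, not necessarily the exact approach the Level 3 solution takes):

```python
import numpy as np

# Toy (latitude, longitude) image standing in for hdu.data[0, :, :];
# values increase to the right, mimicking the model's storage order.
image = np.arange(12.0).reshape(3, 4)

# Reversing the last (longitude) axis mirrors the image left-right,
# so longitude then increases to the left, the usual sky convention.
flipped = image[:, ::-1]
```

If you flip the data this way, remember that any longitude WCS keywords (CRPIX1, CDELT1) would need to be adjusted to match.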

Modifying data or header information in a FITS file object is easy. We can
update existing header keywords:

>>> hdu.header['TELESCOP'] = "Fermi Gamma-ray Space Telescope"

or add new ones:

>>> hdu.header['MODIFIED'] = '26 Feb 2013' # adds a new keyword

and we can also change the data, for example extracting only the first slice
in photon energy:

>>> hdu.data = hdu.data[0,:,:]

Note that this does not change the original FITS file, only the FITS file
object in memory. Since the data is now 2-dimensional, we can remove the WCS keywords for the third dimension:

hdu.header.remove('CRPIX3')
hdu.header.remove('CRVAL3')
hdu.header.remove('CDELT3')
hdu.header.remove('CUNIT3')
hdu.header.remove('CTYPE3')

You can write the FITS file object to a file with:

>>> hdu.writeto('lat_background_model_slice.fits')

if you want to simply write out this HDU to a file, or:

>>> hdulist.writeto('lat_background_model_slice_allhdus.fits')

if you want to write out all of the original HDUs, including the modified one,
to a file.

Creating a FITS file from scratch

If you want to create a FITS file from scratch, you need to start off by creating an HDU object:

>>> hdu = fits.PrimaryHDU()

and you can then populate the data and header attributes with whatever information you like:

>>> import numpy as np
>>> hdu.data = np.random.random((128,128))

Note that setting the data automatically populates the header with basic information:

>>> hdu.header
SIMPLE  =                    T / conforms to FITS standard
BITPIX  =                  -64 / array data type
NAXIS   =                    2 / number of array dimensions
NAXIS1  =                  128
NAXIS2  =                  128
EXTEND  =                    T

and you should never have to set header keywords such as NAXIS, NAXIS1, and so on manually. We can then set additional header keywords:

>>> hdu.header['telescop'] = 'Python Observatory'

and we can then write out the FITS file to disk:

>>> hdu.writeto('random_array.fits')

If the file already exists, you can overwrite it with:

>>> hdu.writeto('random_array.fits', clobber=True)
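Note that in astropy 1.3 and later the clobber argument was renamed to overwrite, and the old name was eventually removed. With a recent astropy the equivalent is:

```python
import numpy as np
from astropy.io import fits

hdu = fits.PrimaryHDU(data=np.random.random((128, 128)))

# Older astropy / PyFITS (as used in this tutorial):
#   hdu.writeto('random_array.fits', clobber=True)
# astropy >= 1.3 spells the same option as overwrite:
hdu.writeto('random_array.fits', overwrite=True)
```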

Convenience functions

In cases where you just want to access the data or header in a specific HDU,
you can use the following convenience functions:

>>> data = fits.getdata('gll_iem_v02_P6_V11_DIFFUSE.fit')
>>> header = fits.getheader('gll_iem_v02_P6_V11_DIFFUSE.fit')

To get the data or header for an HDU other than the first, you can specify the
extension name or index. The second HDU is called energies, so we can do:

>>> data = fits.getdata('gll_iem_v02_P6_V11_DIFFUSE.fit', extname='energies')

or:

>>> data = fits.getdata('gll_iem_v02_P6_V11_DIFFUSE.fit', ext=1)

and similarly for getheader.

Accessing Tabular Data

In Astropy 0.2, FITS tables cannot be read/written directly from the Table
class. To create a Table object from a FITS table, you can use
astropy.io.fits :

>>> from astropy.io import fits
>>> from astropy.table import Table
>>> data = fits.getdata('catalog.fits', 1)
>>> t = Table(data)

and to write out, you can use astropy.io.fits , converting the table to a
Numpy array:

>>> fits.writeto('new_catalog.fits', np.array(t))

The main drawback of the current approach is that table metadata like UCDs and
other FITS header keywords are lost. Future versions of Astropy will support
reading/writing FITS tables directly from the Table class.

Practical Exercises

Exercise

Read in the LAT Point Source Catalog and make a scatter plot of the
Galactic Coordinates of the sources (complete with axis labels). Bonus
points if you can make the plot go between -180 and 180 instead of 0 and
360 degrees. Note that the Point Source Catalog contains the Galactic
Coordinates, so no need to convert them.

Solution

from astropy.io import fits
from astropy.table import Table
from matplotlib import pyplot as plt

# Read in Point Source Catalog
data = fits.getdata('gll_psc_v08.fit', 1)
psc = Table(data)

# Extract Galactic Coordinates
l = psc['GLON']
b = psc['GLAT']

# Coordinates from 0 to 360, wrap to -180 to 180 to match image
l[l > 180.] -= 360.

# Plot the image
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, aspect='equal')
ax.scatter(l, b)
ax.set_xlim(180., -180.)
ax.set_ylim(-90., 90.)
ax.set_xlabel('Galactic Longitude')
ax.set_ylabel('Galactic Latitude')
fig.savefig('fits_level2.png', bbox_inches='tight')

../_images/fits_level2.png

Advanced exercise

Using Matplotlib, make an all-sky plot of the LAT Background Model in the
Plate Carrée projection showing the LAT Point Source Catalog overlaid with
markers, and with the correct coordinates on the axes. You should do this
using only astropy.io.fits , Numpy, and Matplotlib (no WCS or
coordinate conversion library). Hint: the -CAR projection is such that the
x pixel position is proportional to longitude, and the y pixel position to
latitude. Bonus points for a pretty colormap.

Solution

# this continues from the previous exercise

# Read in Background Model
hdulist = fits.open('gll_iem_v02_P6_V11_DIFFUSE.fit')
bg = hdulist[0].data[0, :, :]

# Plot the image
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.imshow(bg ** 0.5, extent=[-180., 180., -90., 90.],
          cmap=plt.cm.gist_heat, origin='lower', vmin=0, vmax=2e-3)
ax.scatter(l, b, s=10, edgecolor='none', facecolor='blue', alpha=0.5)
ax.set_xlim(180., -180.)
ax.set_ylim(-90., 90.)
ax.set_xlabel('Galactic Longitude')
ax.set_ylabel('Galactic Latitude')
fig.savefig('fits_level3.png', bbox_inches='tight')

../_images/fits_level3.png
