Module cosmics

Source Code for Module cosmics

"""
About
=====

cosmics.py is a small and simple python module to detect and clean cosmic ray hits on images (numpy arrays or FITS), using scipy, and based on Pieter van Dokkum's L.A.Cosmic algorithm.

L.A.Cosmic = Laplacian cosmic ray detection

U{http://www.astro.yale.edu/dokkum/lacosmic/}

(article : U{http://arxiv.org/abs/astro-ph/0108003})


Additional features
===================

I pimped this a bit to suit my needs :

        - Automatic recognition of saturated stars, including their full saturation trails.
        This prevents such stars from being treated as big cosmics.
        Indeed saturated stars tend to get even uglier when you try to clean them. Plus they
        keep L.A.Cosmic iterations going on forever.
        This feature is mainly for pretty-image production. It is optional, requires one more parameter (a CCD saturation level in ADU), and uses some
        nicely robust morphology operations and object extraction.

        - Scipy image analysis makes it possible to "label" the actual cosmic ray hits (i.e. group the pixels into local islands).
        A bit special, but I use this in the scope of visualizing a PSF construction.

But otherwise the core is really a 1-to-1 implementation of L.A.Cosmic, and uses the same parameters.
Only the conventions on how filters are applied at the image edges might be different.

No surprise, this python module is much faster than the IRAF implementation, as it does not read/write every step to disk.

Usage
=====

Everything is in the file cosmics.py, all you need to do is to import it. You need pyfits, numpy and scipy.
See the demo scripts for example usages (the second demo uses f2n.py to make pngs, and thus also needs PIL).

Your image should have clean borders, cut away prescan/overscan etc.

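A minimal sketch of a typical call sequence (purely illustrative ; the file names and parameter values below are arbitrary, not recommendations) :

        import cosmics
        array, header = cosmics.fromfits("image.fits")
        c = cosmics.cosmicsimage(array, gain=2.2, readnoise=10.0, sigclip=5.0, sigfrac=0.3, objlim=5.0)
        c.run(maxiter=4)
        cosmics.tofits("image_clean.fits", c.getcleanarray(), header)
        cosmics.tofits("mask.fits", c.getmask(), header)
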
Todo
====
Ideas for future improvements :

        - Add something reliable to detect negative glitches (dust on CCD or small traps)
        - Top level functions to simply run all this on either numpy arrays or directly on FITS files
        - Reduce memory usage ... easy
        - Switch from signal to ndimage, homogenize mirror boundaries


Malte Tewes, January 2010
"""

__version__ = '0.4'

import os
import numpy as np
import math
import scipy.signal as signal
import scipy.ndimage as ndimage
import pyfits


# We define the laplacian kernel to be used
laplkernel = np.array([[0.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 0.0]])
# Other kernels :
growkernel = np.ones((3,3))
dilstruct = np.ones((5,5))      # dilation structure for some morphological operations
dilstruct[0,0] = 0
dilstruct[0,4] = 0
dilstruct[4,0] = 0
dilstruct[4,4] = 0
# So this dilstruct looks like :
#       01110
#       11111
#       11111
#       11111
#       01110
# and is used to dilate saturated stars and connect cosmic rays.


class cosmicsimage:

    def __init__(self, rawarray, pssl=0.0, gain=2.2, readnoise=10.0, sigclip=5.0, sigfrac=0.3, objlim=5.0, satlevel=50000.0, verbose=True):
        """
        Constructor of the cosmic class, takes a 2D numpy array of your image as main argument.

        sigclip : laplacian-to-noise limit for cosmic ray detection.
        Increase this if you detect cosmics where there are none. Default is 5.0, a good value for earth-bound images.
        objlim : minimum contrast between laplacian image and fine structure image. Use 5.0 if your image is undersampled, HST, ...
        Increase this if normal stars are detected as cosmics. Default is 5.0, a good value for earth-bound images.

        satlevel : if we find agglomerations of pixels above this level, we consider it to be a saturated star and
        do not try to correct any pixels around it. A negative satlevel skips this feature.

        pssl is the previously subtracted sky level !

        For reference, the corresponding parameters of the original IRAF implementation are :

        real gain = 1.8          # gain (electrons/ADU) (0=unknown)
        real readn = 6.5         # read noise (electrons) (0=unknown)
        ##gain0 string statsec = "*,*"       # section to use for automatic computation of gain
        real skyval = 0.         # sky level that has been subtracted (ADU)
        real sigclip = 3.0       # detection limit for cosmic rays (sigma)
        real sigfrac = 0.5       # fractional detection limit for neighbouring pixels
        real objlim = 3.0        # contrast limit between CR and underlying object
        int niter = 1            # maximum number of iterations

        """

        self.rawarray = rawarray + pssl  # internally, we will always work "with sky".
        self.cleanarray = self.rawarray.copy()  # In lacosmiciteration() we work on this guy
        self.mask = np.cast['bool'](np.zeros(self.rawarray.shape))  # All False, no cosmics yet

        self.gain = gain
        self.readnoise = readnoise
        self.sigclip = sigclip
        self.objlim = objlim
        self.sigcliplow = sigclip * sigfrac
        self.satlevel = satlevel

        self.verbose = verbose

        self.pssl = pssl

        self.backgroundlevel = None  # only calculated and used if required.
        self.satstars = None  # a mask of the saturated stars, only calculated if required

    def __str__(self):
        """
        Gives a summary of the current state, including the number of cosmic pixels in the mask etc.
        """
        stringlist = [
            "Input array : (%i, %i), %s" % (self.rawarray.shape[0], self.rawarray.shape[1], self.rawarray.dtype.name),
            "Current cosmic ray mask : %i pixels" % np.sum(self.mask)
        ]

        if self.pssl != 0.0:
            stringlist.append("Using a previously subtracted sky level of %f" % self.pssl)

        if self.satstars != None:
            stringlist.append("Saturated star mask : %i pixels" % np.sum(self.satstars))

        return "\n".join(stringlist)

    def labelmask(self, verbose=None):
        """
        Finds and labels the cosmic "islands" and returns a list of dicts containing their positions.
        This is made on purpose for visualizations a la f2n.drawstarslist, but could be useful anyway.
        """
        if verbose == None:
            verbose = self.verbose
        if verbose:
            print "Labeling mask pixels ..."
        # We morphologically dilate the mask to generously connect "sparse" cosmics :
        #dilstruct = np.ones((5,5))
        dilmask = ndimage.morphology.binary_dilation(self.mask, structure=dilstruct, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
        # origin = 0 means center
        (labels, n) = ndimage.measurements.label(dilmask)
        #print "Number of cosmic ray hits : %i" % n
        #tofits(labels, "labels.fits", verbose = False)
        slicecouplelist = ndimage.measurements.find_objects(labels)
        # Now we have a huge list of couples of numpy slice objects giving a frame around each object
        # For plotting purposes, we want to transform this into the center of each object.
        if len(slicecouplelist) != n:
            # This never happened, but you never know ...
            raise RuntimeError, "Mega error in labelmask !"
        centers = [[(tup[0].start + tup[0].stop)/2.0, (tup[1].start + tup[1].stop)/2.0] for tup in slicecouplelist]
        # We also want to know how many pixels were affected by each cosmic ray.
        # Why ? Dunno ... it's fun and available in scipy :-)
        sizes = ndimage.measurements.sum(self.mask.ravel(), labels.ravel(), np.arange(1,n+1,1))
        retdictlist = [{"name": "%i" % size, "x": center[0], "y": center[1]} for (size, center) in zip(sizes, centers)]

        if verbose:
            print "Labeling done"

        return retdictlist

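    # Illustrative sketch, not part of the original module : given a cosmicsimage instance c
    # on which run() has been called, the labelmask() output can be consumed like this
    # (simply printing instead of the f2n-based drawing it was designed for) :
    #
    #   for hit in c.labelmask():
    #       print hit["name"], hit["x"], hit["y"]   # number of affected pixels, island center
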
    def getdilatedmask(self, size=3):
        """
        Returns a morphologically dilated copy of the current mask.
        size = 3 or 5 decides how to dilate.
        """
        if size == 3:
            dilmask = ndimage.morphology.binary_dilation(self.mask, structure=growkernel, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
        elif size == 5:
            dilmask = ndimage.morphology.binary_dilation(self.mask, structure=dilstruct, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
        else:
            dilmask = self.mask.copy()  # any other size : return an undilated copy

        return dilmask

    def clean(self, mask=None, verbose=None):
        """
        Given the mask, we replace the actual problematic pixels with the masked 5x5 median value.
        This mimics what is done in L.A.Cosmic, but it's a bit harder to do in python, as there is no
        readymade masked median. So for now we do a loop ...
        Saturated stars, if calculated, are also masked : they are not "cleaned", but their pixels are not
        used for the interpolation.

        We will directly change self.cleanarray. Instead of using self.mask, you can supply your
        own mask as argument. This might be useful to apply this cleaning function iteratively.
        But for the true L.A.Cosmic, we don't use this, i.e. we use the full mask at each iteration.

        """
        if verbose == None:
            verbose = self.verbose
        if mask == None:
            mask = self.mask

        if verbose:
            print "Cleaning cosmic affected pixels ..."

        # So ... mask is a 2D array containing False and True, where True means "here is a cosmic"
        # We want to loop through these cosmics one by one.
        cosmicindices = np.argwhere(mask)
        # This is a list of the indices of cosmic affected pixels.
        #print cosmicindices

        # We put cosmic ray pixels to np.Inf to flag them :
        self.cleanarray[mask] = np.Inf

        # Now we want to have a 2 pixel frame of Inf padding around our image.
        w = self.cleanarray.shape[0]
        h = self.cleanarray.shape[1]
        padarray = np.zeros((w+4,h+4))+np.Inf
        padarray[2:w+2,2:h+2] = self.cleanarray.copy()  # that copy is important, we need 2 independent arrays

        # The medians will be evaluated in this padarray, skipping the np.Inf.
        # Now in this copy called padarray, we also put the saturated stars to np.Inf, if available :
        if self.satstars != None:
            padarray[2:w+2,2:h+2][self.satstars] = np.Inf
        # Viva python, I tested this one, it works ...

        # A loop through every cosmic pixel :
        for cosmicpos in cosmicindices:
            x = cosmicpos[0]
            y = cosmicpos[1]
            cutout = padarray[x:x+5, y:y+5].ravel()  # remember the shift due to the padding !
            #print cutout
            # Now we have our 25 pixels, some of them are np.Inf, and we want to take the median
            goodcutout = cutout[cutout != np.Inf]
            #print np.alen(goodcutout)

            if np.alen(goodcutout) >= 25:
                # This never happened, but you never know ...
                raise RuntimeError, "Mega error in clean !"
            elif np.alen(goodcutout) > 0:
                replacementvalue = np.median(goodcutout)
            else:
                # i.e. no good pixels : a huge cosmic, we will have to improvise ...
                print "OH NO, I HAVE A HUUUUUUUGE COSMIC !!!!!"
                replacementvalue = self.guessbackgroundlevel()

            # We update the cleanarray,
            # but measure the medians in the padarray, so as not to mix things up ...
            self.cleanarray[x, y] = replacementvalue

        # That's it.
        if verbose:
            print "Cleaning done"

        # FYI, this is how the L.A.Cosmic cleaning looks in iraf :
        """
        imarith(outmask,"+",finalsel,outmask)
        imreplace(outmask,1,lower=1,upper=INDEF) # ok so outmask = 1 are the cosmics
        imcalc(outmask,inputmask,"(1.-10000.*im1)",verb-)
        imarith(oldoutput,"*",inputmask,inputmask)
        median(inputmask,med5,5,5,zloreject=-9999,zhi=INDEF,verb-)
        imarith(outmask,"*",med5,med5)
        if (i>1) imdel(output)
        imcalc(oldoutput//","//outmask//","//med5,output,"(1.-im2)*im1+im3",verb-)

        # =

        merging into the full mask
        inputmask = 1.0 - 10000.0 * finalsel  # So this is 1.0, but cosmics are very negative
        inputmask = oldoutput * inputmask  # orig image, with very negative cosmics
        med5 = median of inputmask, but rejecting these negative cosmics
        # I don't know how to do this in python -> had to do the loop
        med5 = finalsel * med5  # we keep only the cosmics of this median
        # actual replacement :
        output = (1.0 - outmask)*oldoutput + med5  # ok
        """

    def findsatstars(self, verbose=None):
        """
        Uses the satlevel to find saturated stars (not cosmics !), and puts the result as a mask in self.satstars.
        This can then be used to avoid these regions in cosmic detection and cleaning procedures.
        Slow ...
        """
        if verbose == None:
            verbose = self.verbose
        if verbose:
            print "Detecting saturated stars ..."
        # DETECTION

        satpixels = self.rawarray > self.satlevel  # the candidate pixels

        # We build a smoothed version of the image to look for large stars and their support :
        m5 = ndimage.filters.median_filter(self.rawarray, size=5, mode='mirror')
        # We look where this is above half the satlevel
        largestruct = m5 > (self.satlevel/2.0)
        # The rough locations of saturated stars are now :
        satstarscenters = np.logical_and(largestruct, satpixels)

        if verbose:
            print "Building mask of saturated stars ..."

        # BUILDING THE MASK
        # The subtlety is that we want to include all saturated pixels connected to these saturated stars ...
        # I haven't found a better solution than the double loop

        # We dilate the satpixels alone, to ensure connectivity in glitchy regions and to add a safety margin around them.
        #dilstruct = np.array([[0,1,0], [1,1,1], [0,1,0]])

        dilsatpixels = ndimage.morphology.binary_dilation(satpixels, structure=dilstruct, iterations=2, mask=None, output=None, border_value=0, origin=0, brute_force=False)
        # It turns out it's better to think large and do 2 iterations ...

        # We label these :
        (dilsatlabels, nsat) = ndimage.measurements.label(dilsatpixels)
        #tofits(dilsatlabels, "test.fits")

        if verbose:
            print "We have %i saturated stars." % nsat

        # The output, False for now :
        outmask = np.zeros(self.rawarray.shape)

        for i in range(1, nsat+1):  # we go through the islands of saturated pixels
            thisisland = dilsatlabels == i  # gives us a boolean array
            # Does this intersect with satstarscenters ?
            overlap = np.logical_and(thisisland, satstarscenters)
            if np.sum(overlap) > 0:
                outmask = np.logical_or(outmask, thisisland)  # we add thisisland to the mask

        self.satstars = np.cast['bool'](outmask)

        if verbose:
            print "Mask of saturated stars done"

    def getsatstars(self, verbose=None):
        """
        Returns the mask of saturated stars after finding them if not yet done.
        Intended mainly for external use.
        """
        if verbose == None:
            verbose = self.verbose
        if not self.satlevel > 0:
            raise RuntimeError, "Cannot determine satstars : you gave satlevel <= 0 !"
        if self.satstars == None:
            self.findsatstars(verbose=verbose)
        return self.satstars

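    # Illustrative sketch, not part of the original module : with a CCD saturation level given
    # at construction, the saturated-star mask can be computed and inspected on its own
    # (the file name below is arbitrary) :
    #
    #   c = cosmicsimage(array, satlevel=50000.0)
    #   tofits("satstars.fits", c.getsatstars(), header)
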
    def getmask(self):
        return self.mask

    def getrawarray(self):
        """
        For external use only, as it returns the rawarray minus pssl !
        """
        return self.rawarray - self.pssl

    def getcleanarray(self):
        """
        For external use only, as it returns the cleanarray minus pssl !
        """
        return self.cleanarray - self.pssl

    def guessbackgroundlevel(self):
        """
        Estimates the background level. This could be used to fill pixels in large cosmics.
        """
        if self.backgroundlevel == None:
            self.backgroundlevel = np.median(self.rawarray.ravel())
        return self.backgroundlevel

    def lacosmiciteration(self, verbose=None):
        """
        Performs one iteration of the L.A.Cosmic algorithm.
        It operates on self.cleanarray, and afterwards updates self.mask by adding the newly detected
        cosmics to the existing self.mask. Cleaning is not done automatically ! You have to call
        clean() after each iteration.
        This way you can run it several times in a row to do L.A.Cosmic "iterations".
        See the method run(), which mimics the full iterative L.A.Cosmic algorithm.

        Returns a dict containing
            - niter : the number of cosmic pixels detected in this iteration
            - nnew : among these, how many were not yet in the mask
            - itermask : the mask of pixels detected in this iteration
            - newmask : the pixels detected that were not yet in the mask

        If findsatstars() was called, we exclude these regions from the search.

        """

        if verbose == None:
            verbose = self.verbose

        if verbose:
            print "Convolving image with Laplacian kernel ..."

        # We subsample, convolve, clip negative values, and rebin to original size
        subsam = subsample(self.cleanarray)
        conved = signal.convolve2d(subsam, laplkernel, mode="same", boundary="symm")
        cliped = conved.clip(min=0.0)
        #cliped = np.abs(conved)  # unfortunately this does not work to find holes as well ...
        lplus = rebin2x2(cliped)

        if verbose:
            print "Creating noise model ..."

        # We build a custom noise map, to compare the laplacian against
        m5 = ndimage.filters.median_filter(self.cleanarray, size=5, mode='mirror')
        # We keep this m5, as I will use it later for the interpolation.
        m5clipped = m5.clip(min=0.00001)  # As we will take the sqrt
        noise = (1.0/self.gain) * np.sqrt(self.gain*m5clipped + self.readnoise*self.readnoise)

        if verbose:
            print "Calculating Laplacian signal to noise ratio ..."

        # Laplacian signal to noise ratio :
        s = lplus / (2.0 * noise)  # the 2.0 is from the 2x2 subsampling
        # This s is called sigmap in the original lacosmic.cl

        # We remove the large structures (s prime) :
        sp = s - ndimage.filters.median_filter(s, size=5, mode='mirror')

        if verbose:
            print "Selecting candidate cosmic rays ..."

        # Candidate cosmic rays (this will include stars + HII regions)
        candidates = sp > self.sigclip
        nbcandidates = np.sum(candidates)

        if verbose:
            print " %5i candidate pixels" % nbcandidates

        # At this stage we use the saturated stars to mask the candidates, if available :
        if self.satstars != None:
            if verbose:
                print "Masking saturated stars ..."
            candidates = np.logical_and(np.logical_not(self.satstars), candidates)
            nbcandidates = np.sum(candidates)

            if verbose:
                print " %5i candidate pixels not part of saturated stars" % nbcandidates

        if verbose:
            print "Building fine structure image ..."

        # We build the fine structure image :
        m3 = ndimage.filters.median_filter(self.cleanarray, size=3, mode='mirror')
        m37 = ndimage.filters.median_filter(m3, size=7, mode='mirror')
        f = m3 - m37
        # In the article that's it, but in lacosmic.cl f is divided by the noise ...
        # Ok, I understand why : it depends on whether you use sp/f or L+/f as the criterion.
        # There are some differences between the article and the iraf implementation.
        # So I will stick to the iraf implementation.
        f = f / noise
        f = f.clip(min=0.01)  # as we will divide by f, like in the iraf version.

        if verbose:
            print "Removing suspected compact bright objects ..."

        # Now we have our better selection of cosmics :
        cosmics = np.logical_and(candidates, sp/f > self.objlim)
        # Note the sp/f and not lplus/f ... due to the f = f/noise above.

        nbcosmics = np.sum(cosmics)

        if verbose:
            print " %5i remaining candidate pixels" % nbcosmics

        # What follows is a special treatment for neighbors, with more relaxed constraints.

        if verbose:
            print "Finding neighboring pixels affected by cosmic rays ..."

        # We grow these cosmics a first time to determine the immediate neighborhood :
        growcosmics = np.cast['bool'](signal.convolve2d(np.cast['float32'](cosmics), growkernel, mode="same", boundary="symm"))

        # From this grown set, we keep those that have sp > sigmalim
        # (so obviously not requiring sp/f > objlim, otherwise it would be pointless)
        growcosmics = np.logical_and(sp > self.sigclip, growcosmics)

        # Now we repeat this procedure, but lower the detection limit to sigmalimlow :

        finalsel = np.cast['bool'](signal.convolve2d(np.cast['float32'](growcosmics), growkernel, mode="same", boundary="symm"))
        finalsel = np.logical_and(sp > self.sigcliplow, finalsel)

        # Again, we have to kick out pixels on saturated stars :
        if self.satstars != None:
            if verbose:
                print "Masking saturated stars ..."
            finalsel = np.logical_and(np.logical_not(self.satstars), finalsel)

        nbfinal = np.sum(finalsel)

        if verbose:
            print " %5i pixels detected as cosmics" % nbfinal

        # Now the replacement of the cosmics ...
        # We outsource this to the method clean(), as for some purposes the cleaning might not even be needed.
        # Easy way without masking would be :
        #self.cleanarray[finalsel] = m5[finalsel]

        # We find how many cosmics are not yet known :
        newmask = np.logical_and(np.logical_not(self.mask), finalsel)
        nbnew = np.sum(newmask)

        # We update the mask with the cosmics we have found :
        self.mask = np.logical_or(self.mask, finalsel)

        # We return (this dict is used by the method run) :

        return {"niter": nbfinal, "nnew": nbnew, "itermask": finalsel, "newmask": newmask}

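    # Illustrative sketch, not part of the original module : instead of calling run(), the
    # iterations can be driven by hand on a cosmicsimage instance c, cleaning after each one :
    #
    #   for i in range(4):
    #       iterres = c.lacosmiciteration(verbose=True)
    #       c.clean()                     # always clean using the full, updated mask
    #       if iterres["niter"] == 0:     # stop as soon as an iteration finds no cosmics
    #           break
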
    def findholes(self, verbose=True):
        """
        Detects "negative cosmics" in the cleanarray and adds them to the mask.
        This is not working yet.
        """
        pass

        """
        if verbose == None:
            verbose = self.verbose

        if verbose:
            print "Finding holes ..."

        m3 = ndimage.filters.median_filter(self.cleanarray, size=3, mode='mirror')
        h = (m3 - self.cleanarray).clip(min=0.0)

        tofits("h.fits", h)
        sys.exit()

        # The holes are the peaks in this image that are not stars

        #holes = h > 300
        """
        """
        subsam = subsample(self.cleanarray)
        conved = -signal.convolve2d(subsam, laplkernel, mode="same", boundary="symm")
        cliped = conved.clip(min=0.0)
        lplus = rebin2x2(conved)

        tofits("lplus.fits", lplus)

        m5 = ndimage.filters.median_filter(self.cleanarray, size=5, mode='mirror')
        m5clipped = m5.clip(min=0.00001)
        noise = (1.0/self.gain) * np.sqrt(self.gain*m5clipped + self.readnoise*self.readnoise)

        s = lplus / (2.0 * noise)  # the 2.0 is from the 2x2 subsampling
        # This s is called sigmap in the original lacosmic.cl

        # We remove the large structures (s prime) :
        sp = s - ndimage.filters.median_filter(s, size=5, mode='mirror')

        holes = sp > self.sigclip
        """
        """
        # We have to kick out pixels on saturated stars :
        if self.satstars != None:
            if verbose:
                print "Masking saturated stars ..."
            holes = np.logical_and(np.logical_not(self.satstars), holes)

        if verbose:
            print "%i hole pixels found" % np.sum(holes)

        # We update the mask with the holes we have found :
        self.mask = np.logical_or(self.mask, holes)
        """

    def run(self, maxiter=4, verbose=False):
        """
        Full artillery :-)
            - Find saturated stars
            - Run maxiter L.A.Cosmic iterations (stops if no more cosmics are found)

        Stops if no cosmics are found or if maxiter is reached.
        """

        if self.satlevel > 0 and self.satstars == None:
            self.findsatstars(verbose=True)

        print "Starting %i L.A.Cosmic iterations ..." % maxiter
        for i in range(1, maxiter+1):
            print "Iteration %i" % i

            iterres = self.lacosmiciteration(verbose=verbose)
            print "%i cosmic pixels (%i new)" % (iterres["niter"], iterres["nnew"])

            #self.clean(mask = iterres["mask"])  # No, we want clean to operate on really clean pixels only !
            # Thus we always apply it on the full mask, as lacosmic does :
            self.clean(verbose=verbose)
            # But note that for huge cosmics, one might want to revise this.
            # That's why I added a feature to skip saturated stars !

            if iterres["niter"] == 0:
                break


# Top-level functions


# def fullarray(verbose = False):
#     """
#     Applies the full artillery using and returning only numpy arrays
#     """
#     pass
#
# def fullfits(infile, outcleanfile = None, outmaskfile = None):
#     """
#     Applies the full artillery of the function fullarray() directly on FITS files.
#     """
#     pass


# FITS import - export

def fromfits(infilename, hdu=0, verbose=True):
    """
    Reads a FITS file and returns a 2D numpy array of the data.
    Use hdu to specify which HDU you want (default = primary = 0).
    """

    pixelarray, hdr = pyfits.getdata(infilename, hdu, header=True)
    pixelarray = np.asarray(pixelarray).transpose()

    pixelarrayshape = pixelarray.shape
    if verbose:
        print "FITS import shape : (%i, %i)" % (pixelarrayshape[0], pixelarrayshape[1])
        print "FITS file BITPIX : %s" % (hdr["BITPIX"])
        print "Internal array type :", pixelarray.dtype.name

    return pixelarray, hdr

def tofits(outfilename, pixelarray, hdr=None, verbose=True):
    """
    Takes a 2D numpy array and writes it into a FITS file.
    If you specify a header (pyfits format, as returned by fromfits()), it will be used for the image.
    You can give me boolean numpy arrays, I will convert them into 8 bit integers.
    """
    pixelarrayshape = pixelarray.shape
    if verbose:
        print "FITS export shape : (%i, %i)" % (pixelarrayshape[0], pixelarrayshape[1])

    if pixelarray.dtype.name == "bool":
        pixelarray = np.cast["uint8"](pixelarray)

    if os.path.isfile(outfilename):
        os.remove(outfilename)

    if hdr == None:  # then a minimal header will be created
        hdu = pyfits.PrimaryHDU(pixelarray.transpose())
    else:  # this if/else is probably not needed, but anyway ...
        hdu = pyfits.PrimaryHDU(pixelarray.transpose(), hdr)

    hdu.writeto(outfilename)

    if verbose:
        print "Wrote %s" % outfilename


# Array manipulation

def subsample(a):  # this is more a generic function than a method ...
    """
    Returns a 2x2-subsampled version of array a (no interpolation, just cutting pixels in 4).
    The version below is directly from the scipy cookbook on rebinning :
    U{http://www.scipy.org/Cookbook/Rebinning}
    There is ndimage.zoom(cutout.array, 2, order=0, prefilter=False), but it makes funny borders.

    """
    """
    # Ouuwww this is slow ...
    outarray = np.zeros((a.shape[0]*2, a.shape[1]*2), dtype=np.float64)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            outarray[2*i,2*j] = a[i,j]
            outarray[2*i+1,2*j] = a[i,j]
            outarray[2*i,2*j+1] = a[i,j]
            outarray[2*i+1,2*j+1] = a[i,j]
    return outarray
    """
    # much better :
    newshape = (2*a.shape[0], 2*a.shape[1])
    slices = [slice(0, old, float(old)/new) for old, new in zip(a.shape, newshape)]
    coordinates = np.mgrid[slices]
    indices = coordinates.astype('i')  # choose the biggest smaller integer index
    return a[tuple(indices)]

def rebin(a, newshape):
    """
    Auxiliary function to rebin an ndarray a.
    U{http://www.scipy.org/Cookbook/Rebinning}

    >>> a=rand(6,4); b=rebin(a,(3,2))
    """

    shape = a.shape
    lenShape = len(shape)
    factor = np.asarray(shape)/np.asarray(newshape)
    #print factor
    evList = ['a.reshape('] + \
        ['newshape[%d],factor[%d],' % (i, i) for i in xrange(lenShape)] + \
        [')'] + ['.sum(%d)' % (i+1) for i in xrange(lenShape)] + \
        ['/factor[%d]' % i for i in xrange(lenShape)]

    return eval(''.join(evList))

def rebin2x2(a):
    """
    Wrapper around rebin() that actually rebins 2 by 2.
    """
    inshape = np.array(a.shape)
    if not (inshape % 2 == np.zeros(2)).all():  # Modulo check to see if size is even
        raise RuntimeError, "I want even image shapes !"

    return rebin(a, inshape/2)

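# Illustrative sketch, not part of the original module : subsample() and rebin2x2() are
# (numerically) inverse operations, which is how lacosmiciteration() uses them :
#
#   a = np.random.rand(6, 4)
#   b = subsample(a)    # shape (12, 8) : every pixel is cut into 4 identical pixels
#   c = rebin2x2(b)     # shape (6, 4) : every 2x2 block is averaged back down
#   # np.allclose(a, c) should hold for this round trip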