AlbumShaper 1.0a3
redEye.cpp
1//==============================================
2// copyright : (C) 2003-2005 by Will Stokes
3//==============================================
4// This program is free software; you can redistribute it
5// and/or modify it under the terms of the GNU General
6// Public License as published by the Free Software
7// Foundation; either version 2 of the License, or
8// (at your option) any later version.
9//==============================================
10
11//Systemwide includes
12#include <qimage.h>
13#include <qstring.h>
14#include <qapplication.h>
15
16//Projectwide includes
17#include "redEye.h"
18#include "redEye_internal.h"
20
21//----------------------------------------------
22// Inputs:
23// -------
24// QString filename - location of original image on disk
25// QPoint topLeftExtreme - top left constraint
26// QPoint bottomRightExtreme - bottom right constraint
27// StatusWidget* statusWidget - widget for making progress visible to user
28//
29// Outputs:
30// --------
31// QImage* returned - enhanced image
32//
33// Description:
34// ------------
35// There are a lot of programs out there that provide some sort of
36// red eye tool, but to put it bluntly, most of them really suck.
37// To be fair, the red-eye flash function on most digital cameras (my own
38// Olympus 3030z included) sucks too.
39//
40// "Such foolishness, what can men do against such reckless stupidity?"
41// -unknown
42//
43// Well, here I try to provide a better red-eye tool by studying those that suck
44// and those that suck less, drawing some conclusions, and coming up with a few tricks
45// of my own...
46//
47// The worst red eye tools suck for two reasons:
48// -False positives
49// -Horrid red channel desaturation
50//
51// I've encountered red eye tools that claim to just work
52// by clicking a single button. Guess what, they don't. The sad thing is that
53// while you can do a pretty good job figuring out where the red eyes are
54// in a picture, the programs that provide these brain-dead interfaces usually don't
55// do anything complicated at all and gunk up non-red-eye regions all over the place!
56//
57// The second problem I'd say most programs suffer from is doing a poor job of actually
58// correcting the red eye region, which is a shame but also stems from their generally
59// poor understanding of where the red eyes are.
60//
61// Algorithm:
62// ----------
63// I've developed my own red-eye reduction algorithm that tries to surpass all
64// others by:
65// -finding the red eyes and
66// -carefully fixing the color of these regions only
67//
68// The second step, desaturating the red channel of offending pixels,
69// is largely based on Gaubatz and Ulichney's 2002 IEEE paper titled:
70// "Automatic Red-Eye Detection and Correction"
71//
72// http://www.crl.hpl.hp.com/who/people/ulichney/bib/papers/2002-redeye%20-%20ICIP.pdf
73//
74// Gaubatz and Ulichney base their technique on a complicated face-detection model.
75// I know such approaches are error prone, and guess what, we have a semi (if not
76// very) intelligent user sitting in front of the screen, why not put them to work!
77//
78// Instead of detecting face elements automatically, we first have the user select
79// a region of the image within which two red eyes exist. Before continuing, we
80// attempt to shrink this selection as much as possible by thresholding pixels and
81// tightening the boundary as long as no above-threshold pixels are cut out.
82//
83// threshmet = r > 2*g AND r > MIN_RED_VAL
84//
85// Red eyes tend to be red, but not nearly as green or blue. The second
86// half of the threshold helps throw out low-lying noise by requiring
87// the red channel to be above a minimum threshold.
88//
89// Many programs JUST use the first half of this test (r > 2*g) to pick pixels
90// within a region to fix. I suppose you can get away without the noise test, but
91// fudging up all those other pixels, even if it isn't very noticeable, really bugs me.
92// I did extensive testing and tuned that second parameter to filter such changes out.
93//
94// Once we've shrunk the selected area, we proceed with the heart of the algorithm:
95// 1.) finding blobs
96// 2.) sorting blobs
97// 3.) picking best two blobs
98//
99// and finally...
100//
101// 4.) desaturating the best two blobs OR desaturating the entire selected
102// region if good blobs could not be found.
103//
104// Under the best conditions (most cases) the algorithm finds the offending
105// eyes and reduces them only. In the worst case scenario the algorithm
106// applies the desaturating procedure to all thresholded pixels within the
107// selected area, which is still better than other algorithms in the wild
108// since we'll employ a smarter desaturating technique, but more on that in a bit.
109//
110// Let's examine each step in detail:
111//
112// Finding Blobs:
113// --------------
114// The blob-finding algorithm is actually pretty straightforward.
115// An initial pass over the selected region constructs an integer mask where
116// 0 indicates a pixel that did not meet, and 1 a pixel that did meet, the
117// same red threshold test we applied earlier.
118//
119// If the integer mask is set to 0, move on.
120// If the integer mask is set to 1, assign the next unique ID, push all 8 neighbors
121// that are 1's in the integer mask into a list, and associate each pixel in
122// the list with the unique ID we just set.
123//
124// At the top of the loop we pop pixels off the list while the list is not empty. For each
125// pixel we check its current integer mask value. If it is 1 we set it to the
126// tagged unique ID, push all its neighbors that have 1's in the integer
127// mask, and move on. Below is an example of what the integer mask might look like
128// before and after blobs are found.
129//
130// 0000000000000000000 0000000000000000000
131// 0011000111100000100 0022000333300000400
132// 0111100000111000110 --> 0222200000333000440
133// 0100000000110000010 0200000000330000040
134// 0000000000000000000 0000000000000000000
135//
136// Every time a new pixel is used to start a new blob, the old
137// blob and a few statistics are pushed into a list. In addition to
138// knowledge of the blob ID and, inherently, all tagged pixels (we keep around
139// the integer mask), we also store the pixel count and the blob's aspect ratio (w/h).
140// These stats are useful during the next step.
141//
142// Sorting Blobs:
143// --------------
144// At this point we've found all the above-threshold blobs, which consist of
145// connected above-threshold pixels, but it is often the case that not all blobs
146// are eyes. Acne, lipstick, moles, or plain old poor selection by the user can
147// result in a number of false positive blobs getting pushed into our lists.
148// Fortunately, eyes are:
149// -round
150// -roughly the same size and shape
151//
152// To make actually picking blobs easier, we first sort the blob list by
153// decreasing size, so the biggest ones are up front. You tend to run into a lot more
154// small false positives than large ones, and the large ones tend to not be
155// very round at all (like lips), so throwing them out is a lot easier.
156//
157// Picking Blobs:
158// --------------
159// Picking the two best blobs is fairly straightforward. If only two
160// blobs are found, use those. If more blobs are found, then start walking
161// down the list of blobs starting with the largest ones. The first two
162// consecutive blobs that are roughly circular (0.75 < aspect ratio < 2.0),
163// roughly similar in shape (larger aspect ratio / smaller aspect ratio < 2),
164// roughly similar in size (biggerSize / smallersize < 1.5), and both blobs
165// meet a minimum size threshold (20 pixels) are chosen as the best two blobs.
166//
167// That's all just fine and dandy, but what if two blobs can't be found that
168// meet those constraints? Easy, we'll work on the entire region, but usually we
169// find the eyes without much trouble, while throwing out the other stuff
170// like lips etc.
171//
172// Desaturating:
173// -------------
174// There are two aspects of the desaturation process that make
175// the results provided by this technique far better than most of the
176// other programs out there.
177//
178// First, we only desaturate the red channel. A lot of programs convert
179// the pixel color to grayscale, then dim it slightly. This is bad for two
180// reasons. First, you lose the true pupil color. Second, dimming the pixel
181// causes you to lose the glint that often reflects off the center of the
182// eyeball. Instead, we desaturate the red channel only, and instead
183// of simply decreasing it, we estimate its true value using the green
184// and blue components, which tends to look more natural:
185//
186// r' = 0.05*r + 0.6*g + 0.3*b
187//
188// The problem with directly desaturating the red channel is that you get seams at
189// the blob border. To prevent seams from occurring, we blend the updated
190// red channel color with the original using an alpha term based on
191// the percentage of pixels within a centered 5x5 grid that were marked as
192// blob pixels.
193//
194// The result is seamless red channel correction for the offending red eyes only.
195// The glint in a person's eyes is preserved mainly because of the blob-based
196// approach we take (pixels in the center of a blob are not necessarily tagged,
197// since the white glint does not pass the initial threshold test).
198//
199// A final note, in the situation where two good blobs could not be found
200// we simply desaturate all pixels that meet the less stringent r > 2*g
201// test using the same r' desaturation technique.
202//
203//----------------------------------------------
204
205//==============================================
206QImage* removeRedeyeRegions( QString filename,
207 QPoint topLeftExtreme, QPoint bottomRightExtreme,
208 StatusWidget* statusWidget )
209{
210 //store handle to status widget
211 status = statusWidget;
212
213 //load original image
214 rawImage = QImage( filename );
215
216 //sanity check: unable to load image
217 if(rawImage.isNull()) { return NULL; }
218
219 //convert to 32-bit depth if necessary
220 if( rawImage.depth() < 32 ) { rawImage = rawImage.convertDepth( 32, Qt::AutoColor ); }
221
222 //sanity check: make sure topLeftExtreme and bottomRightExtreme are within image boundary
223 topLeftExtreme.setX( QMAX( topLeftExtreme.x(), 0 ) );
224 topLeftExtreme.setY( QMAX( topLeftExtreme.y(), 0 ) );
225 bottomRightExtreme.setX( QMIN( bottomRightExtreme.x(), rawImage.width()-1 ) );
226 bottomRightExtreme.setY( QMIN( bottomRightExtreme.y(), rawImage.height()-1 ) );
227
228 //setup progress bar
229 QString statusMessage = qApp->translate( "removeRedeyeRegions", "Removing Red-Eye:" );
230 status->showProgressBar( statusMessage, 100 );
231 qApp->processEvents();
232
233 //update progress bar for every 1% of completion
234 updateIncrement = (int) ( 0.01 *
235 ( bottomRightExtreme.x() - topLeftExtreme.x() + 1 ) *
236 ( bottomRightExtreme.y() - topLeftExtreme.y() + 1 ) );
237 newProgress = 0;
238
239 //find region of interest: constrain search box to boundary that actually contains red enough pixels
240 findRegionOfInterest(topLeftExtreme, bottomRightExtreme);
241
242 //if no pixels were found then immediately return a NULL pointer signaling no change
243 if(topLeft.x() == -1)
244 {
245 //hide progress bar
246 status->setStatus( "" );
247 qApp->processEvents();
248
249 return NULL;
250 }
251
252 //load an editing image
253 //two images must be loaded because pixel values are replaced
254 //using a combination of their neighbors' values and their own in order
255 //to avoid sharp lines at the edge of the saturated region
256 editedImage = new QImage( filename );
257
258 //sanity check: unable to allocate edited image
259 if( editedImage == NULL)
260 {
261 //hide progress bar
262 status->setStatus( "" );
263 qApp->processEvents();
264
265 return NULL;
266 }
267
268 //convert to 32-bit depth if necessary
269 if( editedImage->depth() < 32 )
270 {
271 QImage* tmp = editedImage;
272 editedImage = new QImage( tmp->convertDepth( 32, Qt::AutoColor ) );
273 delete tmp; tmp=NULL;
274 }
275
276 findBlobs();
277 sortBlobsByDecreasingSize();
278 findBestTwoBlobs();
279
280 //if we found two good blobs then desaturate those only
281 if(id1 != -1)
282 {
283 desaturateBlobs();
284 }
285 //else desaturate all pixels above thresh within selection area
286 else
287 {
288 desaturateEntireImage(topLeftExtreme, bottomRightExtreme);
289 }
290
291 //remove status bar
292 status->setStatus( "" );
293 qApp->processEvents();
294
295 //return pointer to edited image
296 return editedImage;
297}
298//==============================================
299
300// 40 = 15.7% of the red channel's range, a good heuristic for rejecting
301// false positives at the border of a face on a dark background.
302#define MIN_RED_VAL 40
303
304//==============================================
305void findRegionOfInterest(QPoint topLeftExtreme, QPoint bottomRightExtreme)
306{
307 topLeft = QPoint(-1,-1);
308 bottomRight = QPoint(-1,-1);
309
310 int x, y;
311 QRgb* rgb;
312 uchar* scanLine;
313 for( y=topLeftExtreme.y(); y<=bottomRightExtreme.y(); y++)
314 {
315 scanLine = rawImage.scanLine(y);
316 for( x=topLeftExtreme.x(); x<=bottomRightExtreme.x(); x++)
317 {
318 rgb = ((QRgb*)scanLine+x);
319
320 bool threshMet = qRed(*rgb) > 2*qGreen(*rgb) &&
321 qRed(*rgb) > MIN_RED_VAL;
322 if(threshMet)
323 {
324 //first pixel
325 if(topLeft.x() == -1)
326 {
327 topLeft = QPoint(x,y);
328 bottomRight = QPoint(x,y);
329 }
330
331 if(x < topLeft.x() ) topLeft.setX( x );
332 if(y < topLeft.y() ) topLeft.setY( y );
333 if(x > bottomRight.x() ) bottomRight.setX( x );
334 if(y > bottomRight.y() ) bottomRight.setY( y );
335 }
336
337 //update status bar if significant progress has been made since last update
338 newProgress++;
339 if( newProgress >= updateIncrement )
340 {
341 newProgress = 0;
342 status->incrementProgress();
343 qApp->processEvents();
344 }
345
346 }
347 }
348}
349//==============================================
350void pushPixel(int x, int y, int id)
351{
352 //if pixel off image or below thresh ignore push attempt
353 if( x < 0 ||
354 y < 0 ||
355 x >= regionWidth ||
356 y >= regionHeight ||
357 regionOfInterest[ x + y*regionWidth ] != 1 )
358 return;
359
360 //passes! set id and actually put pixel onto stack
361 regionOfInterest[ x + y*regionWidth] = id;
362 spreadablePixels.push( QPoint( x, y ) );
363
364 //increase blob pixel count and update topLeft and bottomRight
365 blobPixelCount++;
366 blobTopLeft.setX( QMIN( x, blobTopLeft.x() ) );
367 blobTopLeft.setY( QMIN( y, blobTopLeft.y() ) );
368 blobBottomRight.setX( QMAX( x, blobBottomRight.x() ) );
369 blobBottomRight.setY( QMAX( y, blobBottomRight.y() ) );
370}
371//==============================================
372void findBlobs()
373{
374 //create small matrix for region of interest
375 regionWidth = bottomRight.x() - topLeft.x() + 1;
376 regionHeight = bottomRight.y() - topLeft.y() + 1;
377 regionOfInterest = new int[ regionWidth*regionHeight ];
378
379 //set all pixels that meet thresh to 1, all others to 0
380 int x, y;
381 int x2, y2;
382 QRgb* rgb;
383 uchar* scanLine;
384 for( y=topLeft.y(); y<=bottomRight.y(); y++)
385 {
386 y2 = y - topLeft.y();
387
388 scanLine = rawImage.scanLine(y);
389 for( x=topLeft.x(); x<=bottomRight.x(); x++)
390 {
391
392 x2 = x - topLeft.x();
393
394 rgb = ((QRgb*)scanLine+x);
395
396 bool threshMet = qRed(*rgb) > 2*qGreen(*rgb) &&
397 qRed(*rgb) > MIN_RED_VAL;
398
399 if(threshMet)
400 regionOfInterest[ x2 + y2*regionWidth ] = 1;
401 else
402 regionOfInterest[ x2 + y2*regionWidth ] = 0;
403 }
404 }
405
406 //walk over region of interest and propogate blobs
407 int nextValidID = 2;
408 for(x = 0; x<regionWidth; x++)
409 {
410 for(y = 0; y<regionHeight; y++)
411 {
412 //if any blobs can be propogated handle them first
413 while( !spreadablePixels.empty() )
414 {
415 QPoint point = spreadablePixels.pop();
416 int id = regionOfInterest[ point.x() + point.y()*regionWidth ];
417
418 pushPixel( point.x()-1, point.y()-1, id );
419 pushPixel( point.x(), point.y()-1, id );
420 pushPixel( point.x()+1, point.y()-1, id );
421 pushPixel( point.x()-1, point.y(), id );
422 pushPixel( point.x()+1, point.y(), id );
423 pushPixel( point.x()-1, point.y()+1, id );
424 pushPixel( point.x(), point.y()+1, id );
425 pushPixel( point.x()+1, point.y()+1, id );
426 }
427
428 //if this pixel has met thresh and has not yet been assigned a unique ID,
429 //assign it the next unique id and push all valid neighbors
430 if( regionOfInterest[ x + y*regionWidth ] == 1 )
431 {
432 //print last blob stats
433 if( nextValidID > 2)
434 {
435 blobIDs.push( (nextValidID - 1) );
436 blobSizes.push( blobPixelCount );
437 blobAspectRatios.push( ((double)(blobBottomRight.x() - blobTopLeft.x()+1)) /
438 (blobBottomRight.y() - blobTopLeft.y()+1) );
439 }
440
441 regionOfInterest[x + y*regionWidth] = nextValidID;
442 pushPixel( x-1, y-1, nextValidID );
443 pushPixel( x, y-1, nextValidID );
444 pushPixel( x+1, y-1, nextValidID );
445 pushPixel( x-1, y, nextValidID );
446 pushPixel( x+1, y, nextValidID );
447 pushPixel( x-1, y+1, nextValidID );
448 pushPixel( x, y+1, nextValidID );
449 pushPixel( x+1, y+1, nextValidID );
450 nextValidID++;
451
452 blobPixelCount = 1;
453 blobTopLeft = QPoint( x, y );
454 blobBottomRight = QPoint( x, y );
455 }
456 } //y
457 } //x
458
459 //insert last blob stats
460 if( nextValidID > 2)
461 {
462 blobIDs.push( (nextValidID - 1) );
463 blobSizes.push( blobPixelCount );
464 blobAspectRatios.push( ((double)(blobBottomRight.x() - blobTopLeft.x()+1)) / (blobBottomRight.y() - blobTopLeft.y()+1) );
465 }
466}
467//==============================================
468void sortBlobsByDecreasingSize()
469{
470 blobCount = blobIDs.count();
471 ids = new int[blobCount];
472 sizes = new int[blobCount];
473 ratios = new double[blobCount];
474
475 int i,j;
476 for(i=0; i<blobCount; i++)
477 {
478 ids[i] = blobIDs.pop();
479 sizes[i] = blobSizes.pop();
480 ratios[i] = blobAspectRatios.pop();
481 }
482
483 //quick and dirty bubble sort
484 for(j = blobCount-1; j>0; j--)
485 {
486 for(i=0; i<j; i++)
487 {
488 if( sizes[i+1] > sizes[i] )
489 {
490 int t = sizes[i+1];
491 sizes[i+1] = sizes[i];
492 sizes[i] = t;
493
494 t = ids[i+1];
495 ids[i+1] = ids[i];
496 ids[i] = t;
497
498 double tR = ratios[i+1];
499 ratios[i+1] = ratios[i];
500 ratios[i] = tR;
501 }
502 }
503 }
504}
505//==============================================
506void findBestTwoBlobs()
507{
508 id1 = -1;
509 id2 = -1;
510 int i;
511
512 //special case: 2 blobs found, both larger than 1 pixel
513 if(blobCount == 2 &&
514 sizes[0] > 1 &&
515 sizes[1] > 1)
516 {
517 id1 = ids[0];
518 id2 = ids[1];
519 }
520 else
521 {
522 for(i=0; i<blobCount-1; i++) //-1 so the final adjacent pair is also considered
523 {
524 //once we hit blobs that are only one pixel large stop because they are probably just noise
525 if( sizes[i+1] <= 1 ) break;
526
527 double as1 = ratios[i];
528 double as2 = ratios[i+1];
529
530 if(as1 < 1) as1 = 1.0/as1;
531 if(as2 < 1) as2 = 1.0/as2;
532
533 if( //both blobs must be semi-circular, prefer those that are wider
534 ratios[i] > 0.75 && ratios[i] < 2 &&
535 ratios[i+1] > 0.75 && ratios[i+1] < 2 &&
536 //both blobs must be similar in shape
537 QMAX(as2,as1)/QMIN(as2,as1) < 2 &&
538 //both blobs must be similar in size
539 ((double)QMAX( sizes[i], sizes[i+1] )) / QMIN( sizes[i], sizes[i+1] ) < 1.5 &&
540 //both blobs must be above a certain thresh size, this prevents selecting blobs that are very very tiny
541 //if only tiny blobs are around we'll end up desaturating entire region
542 QMAX( sizes[i], sizes[i+1] ) > 20 )
543 {
544 id1 = ids[i];
545 id2 = ids[i+1];
546 break;
547 }
548 }
549 }
550
551 //Uncomment this section to see what blobs were found and selected
552/* cout << "-----\n";
553 for(i=0; i<blobCount; i++)
554 {
555 if( ids[i] == id1 || ids[i] == id2 )
556 cout << "--->";
557 cout << "ID: " << ids[i] << " count: " << sizes[i] << " w:h: " << ratios[i] << "\n";
558 }*/
559}
560//==============================================
561bool IDedPixel( int x, int y)
562{
563 if( x < topLeft.x() || y < topLeft.y() ||
564 x > bottomRight.x() || y > bottomRight.y() )
565 return false;
566
567 int regionIndex = x - topLeft.x() + (y-topLeft.y())*regionWidth;
568 return ( regionOfInterest[regionIndex] == id1 ||
569 regionOfInterest[regionIndex] == id2 );
570}
571//==============================================
572double desaturateAlpha(int x, int y)
573{
574 int n = 0;
575 if( IDedPixel(x ,y ) ) n++;
576
577 if(n == 1)
578 return 1.0;
579
580 if( IDedPixel(x-1,y-1) ) n++;
581 if( IDedPixel(x ,y-1) ) n++;
582 if( IDedPixel(x+1,y-1) ) n++;
583 if( IDedPixel(x-1,y ) ) n++;
584 if( IDedPixel(x+1,y ) ) n++;
585 if( IDedPixel(x-1,y+1) ) n++;
586 if( IDedPixel(x ,y+1) ) n++;
587 if( IDedPixel(x+1,y+1) ) n++;
588
589 if( IDedPixel(x-2,y-2) ) n++;
590 if( IDedPixel(x-1,y-2) ) n++;
591 if( IDedPixel(x ,y-2) ) n++;
592 if( IDedPixel(x+1,y-2) ) n++;
593 if( IDedPixel(x+2,y-2) ) n++;
594
595 if( IDedPixel(x-2,y-1) ) n++;
596 if( IDedPixel(x+2,y-1) ) n++;
597 if( IDedPixel(x-2,y ) ) n++;
598 if( IDedPixel(x+2,y ) ) n++;
599 if( IDedPixel(x-2,y+1) ) n++;
600 if( IDedPixel(x+2,y+1) ) n++;
601
602 if( IDedPixel(x-2,y+2) ) n++;
603 if( IDedPixel(x-1,y+2) ) n++;
604 if( IDedPixel(x ,y+2) ) n++;
605 if( IDedPixel(x+1,y+2) ) n++;
606 if( IDedPixel(x+2,y+2) ) n++;
607
608
609 return ((double)n) / 25;
610}
611//==============================================
612void desaturateBlobs()
613{
614 //desaturate bad pixels
615 int x, y;
616 double r;
617 QRgb* rgb;
618 uchar* scanLine;
619 for( y = QMAX( topLeft.y()-1, 0);
620 y<= QMIN( bottomRight.y()+1, editedImage->height()-1 );
621 y++)
622 {
623 scanLine = editedImage->scanLine(y);
624 for( x = QMAX( topLeft.x()-1, 0);
625 x <= QMIN( bottomRight.x()+1, editedImage->width()-1 );
626 x++)
627 {
628 double alpha = desaturateAlpha( x, y );
629 if( alpha > 0)
630 {
631 rgb = ((QRgb*)scanLine+x);
632
633 r = alpha*(0.05*qRed(*rgb) + 0.6*qGreen(*rgb) + 0.3*qBlue(*rgb)) +
634 (1-alpha)*qRed(*rgb);
635 *rgb = qRgb( (int)r,
636 qGreen(*rgb),
637 qBlue(*rgb) );
638 } //alpha > 0
639 } //x
640 } //y
641}
642//==============================================
643void desaturateEntireImage(QPoint topLeftExtreme, QPoint bottomRightExtreme)
644{
645 //desaturate bad pixels
646 int x, y;
647 QRgb* rgb;
648 uchar* scanLine;
649 for( y=topLeftExtreme.y(); y<=bottomRightExtreme.y(); y++)
650 {
651 scanLine = editedImage->scanLine(y);
652 for( x=topLeftExtreme.x(); x<=bottomRightExtreme.x(); x++)
653 {
654 rgb = ((QRgb*)scanLine+x);
655 if( qRed(*rgb) > 2*qGreen(*rgb) )
656 {
657 *rgb = qRgb( (int) (0.05*qRed(*rgb) + 0.6*qGreen(*rgb) + 0.3*qBlue(*rgb)),
658 qGreen(*rgb),
659 qBlue(*rgb) );
660 } // > thresh
661 } //x
662 } //y
663}
664//==============================================
665
666
667
668
669