Imaging technologies have influenced biology and neuroscience profoundly, starting from the cell theory and the neuron doctrine. Today’s golden age of fluorescent probes has renewed the belief that innovations in microscopy lead to new discoveries. But much of the excitement over imaging overlooks an important technological gap: scientists not only need machines for making images, but also machines for seeing them. With today’s automated imaging systems, it is common to generate and archive torrents of data. For some experiments, the greatest barrier is no longer acquiring the images, but rather the labor required to analyze them.

Ideally, computers would be made smart enough to analyze images with little or no human assistance. This is easier said than done: it involves fundamental problems that have eluded solution by researchers in artificial intelligence for half a century. One of these problems is image segmentation, the partitioning of an image into sets of pixels (segments) corresponding to distinct objects. For example, a digital camera user might like to segment an image of a room into people, pieces of furniture, and other household objects. A radiologist may need the shapes and sizes of organs in an MRI or CT scan. A biologist may want to find the cells in a fluorescence image from a microscope. Engineers have tried to make computers perform all of these tasks, but computers still make many more errors than humans.

Recently there has been progress in answering two basic questions about image segmentation. What makes a segmentation ‘good’? And given a space of segmentation algorithms, how can a computer be used to search for a good algorithm? In the past few years, the first question has been addressed by the introduction of metrics that mathematically formalize our intuitive notions of ‘good’ segmentation.
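The article does not name a specific metric, but one widely used way to formalize segmentation quality is to score agreement between a machine-generated segmentation and a human-labeled one over all pairs of pixels, as in the Rand index. The sketch below is a minimal illustration of that idea in Python; the function names and the choice of the Rand index are my own assumptions, not something stated in the text above.

```python
from collections import Counter


def _comb2(x):
    # Number of unordered pairs that can be drawn from x items.
    return x * (x - 1) // 2


def rand_index(seg_a, seg_b):
    """Fraction of pixel pairs on which two segmentations agree.

    Two segmentations agree on a pair of pixels if both place the pair
    in the same segment, or both place the pair in different segments.
    seg_a and seg_b are flat sequences of segment labels, one per pixel.
    (Illustrative sketch; not the metric used in any particular paper.)
    """
    assert len(seg_a) == len(seg_b), "segmentations must label the same pixels"
    n = len(seg_a)

    same_in_both = sum(_comb2(c) for c in Counter(zip(seg_a, seg_b)).values())
    same_in_a = sum(_comb2(c) for c in Counter(seg_a).values())
    same_in_b = sum(_comb2(c) for c in Counter(seg_b).values())
    total_pairs = _comb2(n)

    # Agreements = pairs joined in both + pairs separated in both.
    agreements = total_pairs - same_in_a - same_in_b + 2 * same_in_both
    return agreements / total_pairs


# Toy usage: a 2x3 "image" flattened to six pixels.
human_labels = [1, 1, 2, 1, 2, 2]
machine_labels = [1, 1, 1, 1, 2, 2]
print(rand_index(human_labels, machine_labels))  # 1.0 means perfect agreement
```

A score like this gives the second question a concrete objective: once segmentation quality is a number, searching a space of algorithms can, at least in principle, be posed as an optimization problem.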