@article{swiderska-chadaj_learnicytes_2019,
title = {Learning to detect lymphocytes in immunohistochemistry with deep learning},
volume = {58},
issn = {1361-8415},
url = {http://www.sciencedirect.com/science/article/pii/S1361841519300829},
doi = {10.1016/j.media.2019.101547},
abstract = {The immune system is of critical importance in the development of cancer. The evasion of destruction by the immune system is one of the emerging hallmarks of cancer. We have built a dataset of 171,166 manually annotated {CD}3+ and {CD}8+ cells, which we used to train deep learning algorithms for automatic detection of lymphocytes in histopathology images to better quantify immune response. Moreover, we investigate the effectiveness of four deep learning based methods when different subcompartments of the whole-slide image are considered: normal tissue areas, areas with immune cell clusters, and areas containing artifacts. We have compared the proposed methods in breast, colon and prostate cancer tissue slides collected from nine different medical centers. Finally, we report the results of an observer study on lymphocyte quantification, which involved four pathologists from different medical centers, and compare their performance with the automatic detection. The results give insights on the applicability of the proposed methods for clinical use. U-Net obtained the highest performance with an F1-score of 0.78 and the highest agreement with manual evaluation (κ=0.72), whereas the average pathologists agreement with reference standard was κ=0.64. The test set and the automatic evaluation procedure are publicly available at lyon19.grand-challenge.org.},
pages = {101547},
journaltitle = {Medical Image Analysis},
author = {Swiderska-Chadaj, Zaneta and Pinckaers, Hans and Rijthoven, Mart van and Balkenhol, Maschenka and Melnikova, Margarita and Geessink, Oscar and Manson, Quirine and Sherman, Mark and Polonia, Antonio and Parry, Jeremy and Abubakar, Mustapha and Litjens, Geert and Laak, Jeroen van der and Ciompi, Francesco},
date = {2019},
keywords = {Computational pathology, Deep learning, Immune cell detection, Immunohistochemistry}
}
@report{marlow_haskell_2010,
title = {Haskell 2010 Language Report},
author = {Marlow, Simon},
date = {2010}
}
@article{henschel_fastsurfer_2019,
title = {{FastSurfer} – A fast and accurate deep learning based neuroimaging pipeline},
author = {Henschel, Leonie and Conjeti, Sailesh and Estrada, Santiago and Diers, Kersten and Fischl, Bruce and Reuter, Martin},
date = {2019},
eprint = {1910.03866},
eprinttype = {arxiv}
}
@article{girshick_fast_2015,
title = {Fast R-{CNN}},
author = {Girshick, Ross},
date = {2015},
eprint = {1504.08083},
eprinttype = {arxiv}
}
@article{he_delving_2015,
title = {Delving Deep into Rectifiers: Surpassing Human-Level Performance on {ImageNet} Classification},
author = {He, Kaiming and Zhang, Xiangyu and Ren, Shaoqing and Sun, Jian},
date = {2015},
eprint = {1502.01852},
eprinttype = {arxiv}
}
@article{beers_deepneuro_2018,
title = {{DeepNeuro}: an open-source deep learning toolbox for neuroimaging},
author = {Beers, Andrew and Brown, James and Chang, Ken and Hoebel, Katharina and Gerstner, Elizabeth and Rosen, Bruce and Kalpathy-Cramer, Jayashree},
date = {2018},
eprint = {1808.04589},
eprinttype = {arxiv}
}
@article{zhu_deeplung_2018,
title = {{DeepLung}: Deep 3D Dual Path Nets for Automated Pulmonary Nodule Detection and Classification},
author = {Zhu, Wentao and Liu, Chaochun and Fan, Wei and Xie, Xiaohui},
date = {2018},
eprint = {1801.09555},
eprinttype = {arxiv}
}
@article{devlin_bert_2018,
title = {{BERT}: Pre-training of Deep Bidirectional Transformers for Language Understanding},
author = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
date = {2018},
eprint = {1810.04805},
eprinttype = {arxiv}
}
@article{he_automl_2019,
title = {{AutoML}: A Survey of the State-of-the-Art},
author = {He, Xin and Zhao, Kaiyong and Chu, Xiaowen},
date = {2019},
eprint = {1908.00709},
eprinttype = {arxiv}
}
@inproceedings{hutter_automl_2016,
title = {{AutoML} 2016 Workshop Proceedings: Proceedings of the Workshop on Automatic Machine Learning, 24 June 2016, New York, New York, {USA}},
volume = {64},
series = {Proceedings of Machine Learning Research},
publisher = {Proceedings of Machine Learning Research},
author = {Hutter, F. and Kotthoff, L. and Vanschoren, J.},
date = {2016}
}
@article{chen_end--end_2018,
title = {An End-to-end Approach to Semantic Segmentation with 3D {CNN} and Posterior-{CRF} in Medical Images},
author = {Chen, Shuai and Bruijne, Marleen de},
date = {2018},
eprint = {1811.03549},
eprinttype = {arxiv}
}
@article{zhang_computer_2017,
title = {A Computer Vision Pipeline for Automated Determination of Cardiac Structure and Function and Detection of Disease by Two-Dimensional Echocardiography},
author = {Zhang, Jeffrey and Gajjala, Sravani and Agrawal, Pulkit and Tison, Geoffrey H. and Hallock, Laura A. and Beussink-Nelson, Lauren and Fan, Eugene and Aras, Mandar A. and Jordan, {ChaRandle} and Fleischmann, Kirsten E. and Melisko, Michelle and Qasim, Atif and Efros, Alexei and Shah, Sanjiv J. and Bajcsy, Ruzena and Deo, Rahul C.},
date = {2017},
eprint = {1706.07342},
eprinttype = {arxiv}
}
@inproceedings{khvostikov_3d_2018,
title = {3D {CNN}-based classification using {sMRI} and {MD}-{DTI} images for Alzheimer disease studies},
author = {Khvostikov, Alexander and Aderghal, Karim and Benois-Pineau, Jenny and Krylov, Andrey and Catheline, Gwenaelle},
date = {2018},
eprint = {1801.05968},
eprinttype = {arxiv}
}
@article{hohman_visual_2019,
title = {Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers},
volume = {25},
doi = {10.1109/TVCG.2018.2843369},
pages = {2674--2693},
number = {8},
journaltitle = {{IEEE} Transactions on Visualization and Computer Graphics},
author = {Hohman, F. and Kahng, M. and Pienta, R. and Chau, D. H.},
date = {2019},
keywords = {Computational modeling, Conferences, Data visualization, Deep learning, Machine learning, Neural networks, Visual analytics, information visualization, neural networks, visual analytics}
}
@inproceedings{mendoza_towards_2016,
title = {Towards Automatically-Tuned Neural Networks},
booktitle = {{AutoML}@{ICML}},
author = {Mendoza, Hector and Klein, Aaron and Feurer, Matthias and Springenberg, Jost Tobias and Hutter, Frank},
date = {2016}
}
@article{zela_towards_2018,
title = {Towards Automated Deep Learning: Efficient Joint Neural Architecture and Hyperparameter Search},
volume = {abs/1807.06906},
journaltitle = {{ArXiv}},
author = {Zela, Arber and Klein, Aaron and Falkner, Stefan and Hutter, Frank},
date = {2018}
}
@article{jamaludin_spinenet_2017,
title = {{SpineNet}: Automated classification and evidence visualization in spinal {MRIs}},
volume = {41},
issn = {1361-8415},
url = {http://www.sciencedirect.com/science/article/pii/S136184151730110X},
doi = {10.1016/j.media.2017.07.002},
abstract = {The objective of this work is to automatically produce radiological gradings of spinal lumbar {MRIs} and also localize the predicted pathologies. We show that this can be achieved via a Convolutional Neural Network ({CNN}) framework that takes intervertebral disc volumes as inputs and is trained only on disc-specific class labels. Our contributions are: (i) a {CNN} architecture that predicts multiple gradings at once, and we propose variants of the architecture including using 3D convolutions; (ii) showing that this architecture can be trained using a multi-task loss function without requiring segmentation level annotation; and (iii) a localization method that clearly shows pathological regions in the disc volumes. We compare three visualization methods for the localization. The network is applied to a large corpus of {MRI} T2 sagittal spinal {MRIs} (using a standard clinical scan protocol) acquired from multiple machines, and is used to automatically compute disk and vertebra gradings for each {MRI}. These are: Pfirrmann grading, disc narrowing, upper/lower endplate defects, upper/lower marrow changes, spondylolisthesis, and central canal stenosis. We report near human performances across the eight gradings, and also visualize the evidence for these gradings localized on the original scans.},
pages = {63--73},
journaltitle = {Medical Image Analysis},
author = {Jamaludin, Amir and Kadir, Timor and Zisserman, Andrew},
date = {2017},
keywords = {{MRI} analysis, Radiological classification, Spinal {MRI}}
}
@article{hutter_sequential_2011,
title = {Sequential model-based optimization for general algorithm configuration},
pages = {507--523},
author = {Hutter, Frank and Hoos, Holger H. and Leyton-Brown, Kevin},
date = {2011}
}
@article{brock_smash_2017,
title = {{SMASH}: One-Shot Model Architecture Search through {HyperNetworks}},
volume = {abs/1708.05344},
journaltitle = {{ArXiv}},
author = {Brock, Andrew and Lim, Theodore and Ritchie, James M. and Weston, Nick},
date = {2017}
}
@inproceedings{real_regularized_2018,
title = {Regularized Evolution for Image Classifier Architecture Search},
booktitle = {{AAAI}},
author = {Real, Esteban and Aggarwal, Alok and Huang, Yanping and Le, Quoc V.},
date = {2018}
}
@article{ruifrok_quantification_2001,
title = {Quantification of histochemical staining by color deconvolution},
volume = {23},
pages = {291--299},
number = {4},
journaltitle = {Analytical and Quantitative Cytology and Histology},
author = {Ruifrok, Arnout C C and Johnston, Dennis A},
date = {2001}
}
@article{bergstra_random_2012,
title = {Random search for hyper-parameter optimization},
volume = {13},
pages = {281--305},
number = {1},
journaltitle = {Journal of Machine Learning Research},
author = {Bergstra, James and Bengio, Yoshua},
date = {2012}
}
@article{liu_progressive_2017,
title = {Progressive Neural Architecture Search},
volume = {abs/1712.00559},
journaltitle = {{ArXiv}},
author = {Liu, Chenxi and Zoph, Barret and Neumann, Maxim and Shlens, Jonathon and Hua, Wei and Li, Li-Jia and Fei-Fei, Li and Yuille, Alan and Huang, Jonathan and Murphy, Kevin L.},
date = {2017}
}
@article{snoek_practical_2012,
title = {Practical Bayesian Optimization of Machine Learning Algorithms},
pages = {2951--2959},
author = {Snoek, Jasper and Larochelle, Hugo and Adams, Ryan P},
date = {2012}
}
@article{zhong_practical_2017,
title = {Practical Block-Wise Neural Network Architecture Generation},
pages = {2423--2432},
journaltitle = {2018 {IEEE}/{CVF} Conference on Computer Vision and Pattern Recognition},
author = {Zhong, Zhao and Yan, Junjie and Wu, Wei and Shao, Jing and Liu, Cheng-Lin},
date = {2017}
}
@article{wilhelms_octrees_1992,
title = {Octrees for Faster Isosurface Generation},
volume = {11},
issn = {0730-0301},
url = {http://doi.acm.org/10.1145/130881.130882},
doi = {10.1145/130881.130882},
pages = {201--227},
number = {3},
journaltitle = {{ACM} Trans. Graph.},
author = {Wilhelms, Jane and Van Gelder, Allen},
date = {1992},
note = {Place: New York, {NY}, {USA}
Publisher: {ACM}},
keywords = {hierarchical spatial enumeration, isosurface extraction, octree, scientific visualization}
}
@inproceedings{wei_network_2016,
title = {Network Morphism},
booktitle = {{ICML}},
author = {Wei, Tao and Wang, Changhu and Rui, Yong and Chen, Chang Wen},
date = {2016}
}
@article{zoph_neural_2016,
title = {Neural Architecture Search with Reinforcement Learning},
volume = {abs/1611.01578},
journaltitle = {{ArXiv}},
author = {Zoph, Barret and Le, Quoc V.},
date = {2016}
}
@article{chen_net2net_2015,
title = {Net2Net: Accelerating Learning via Knowledge Transfer},
volume = {abs/1511.05641},
journaltitle = {{CoRR}},
author = {Chen, Tianqi and Goodfellow, Ian J. and Shlens, Jonathon},
date = {2015}
}
@article{zoph_learning_2017,
title = {Learning Transferable Architectures for Scalable Image Recognition},
pages = {8697--8710},
journaltitle = {2018 {IEEE}/{CVF} Conference on Computer Vision and Pattern Recognition},
author = {Zoph, Barret and Vasudevan, Vijay and Shlens, Jonathon and Le, Quoc V.},
date = {2017}
}
@inproceedings{real_large-scale_2017,
title = {Large-Scale Evolution of Image Classifiers},
booktitle = {{ICML}},
author = {Real, Esteban and Moore, Sherry and Selle, Andrew and Saxena, Saurabh and Suematsu, Yutaka Leon and Tan, Jie and Le, Quoc V. and Kurakin, Alexey},
date = {2017}
}
@article{armeni_joint_2017,
title = {Joint 2D-3D-Semantic Data for Indoor Scene Understanding},
volume = {abs/1702.01105},
url = {http://arxiv.org/abs/1702.01105},
journaltitle = {{CoRR}},
author = {Armeni, Iro and Sax, Sasha and Zamir, Amir Roshan and Savarese, Silvio},
date = {2017},
eprint = {1702.01105},
eprinttype = {arxiv},
keywords = {Computer Science - Computer Vision and Pattern Recognition, Computer Science - Robotics}
}
@inproceedings{silberman_indoor_2012,
location = {Berlin, Heidelberg},
title = {Indoor Segmentation and Support Inference from {RGBD} Images},
isbn = {978-3-642-33714-7},
url = {http://dx.doi.org/10.1007/978-3-642-33715-4_54},
doi = {10.1007/978-3-642-33715-4_54},
series = {{ECCV}'12},
pages = {746--760},
booktitle = {Proceedings of the 12th European Conference on Computer Vision - Volume Part V},
publisher = {Springer-Verlag},
author = {Silberman, Nathan and Hoiem, Derek and Kohli, Pushmeet and Fergus, Rob},
date = {2012},
note = {event-place: Florence, Italy}
}
@article{ren_faster_2015,
title = {Faster R-{CNN}: Towards Real-Time Object Detection with Region Proposal Networks},
volume = {abs/1506.01497},
url = {http://arxiv.org/abs/1506.01497},
journaltitle = {{CoRR}},
author = {Ren, Shaoqing and He, Kaiming and Girshick, Ross B. and Sun, Jian},
date = {2015},
eprint = {1506.01497},
eprinttype = {arxiv}
}
@article{liu_hierarchical_2017,
title = {Hierarchical Representations for Efficient Architecture Search},
volume = {abs/1711.00436},
journaltitle = {{ArXiv}},
author = {Liu, Hanxiao and Simonyan, Karen and Vinyals, Oriol and Fernando, Chrisantha and Kavukcuoglu, Koray},
date = {2017}
}
@article{klein_fast_2016,
title = {Fast Bayesian Optimization of Machine Learning Hyperparameters on Large Datasets},
volume = {abs/1605.07079},
journaltitle = {{ArXiv}},
author = {Klein, Aaron and Falkner, Stefan and Bartels, Simon and Hennig, Philipp and Hutter, Frank},
date = {2016}
}
@article{stanley_evolving_2001,
title = {Evolving Neural Networks through Augmenting Topologies},
volume = {10},
pages = {99--127},
journaltitle = {Evolutionary Computation},
author = {Stanley, Kenneth O. and Miikkulainen, Risto},
date = {2002}
}
@article{klokov_escape_2017,
title = {Escape from Cells: Deep Kd-Networks for The Recognition of 3D Point Cloud Models},
volume = {abs/1704.01222},
url = {http://arxiv.org/abs/1704.01222},
journaltitle = {{CoRR}},
author = {Klokov, Roman and Lempitsky, Victor S.},
date = {2017},
eprint = {1704.01222},
eprinttype = {arxiv}
}
@article{pham_efficient_2018,
title = {Efficient Neural Architecture Search via Parameter Sharing},
volume = {abs/1802.03268},
journaltitle = {{ArXiv}},
author = {Pham, Hieu and Guan, Melody Y. and Zoph, Barret and Le, Quoc V. and Dean, Jeff},
date = {2018}
}
@inproceedings{cai_efficient_2017,
title = {Efficient Architecture Search by Network Transformation},
booktitle = {{AAAI}},
author = {Cai, Han and Chen, Tianyao and Zhang, Weinan and Yu, Yong and Wang, Jun},
date = {2017}
}
@article{baker_designing_2016,
title = {Designing Neural Network Architectures using Reinforcement Learning},
volume = {abs/1611.02167},
journaltitle = {{ArXiv}},
author = {Baker, Bowen and Gupta, Otkrist and Naik, Nikhil and Raskar, Ramesh},
date = {2016}
}
@article{liu_darts_2018,
title = {{DARTS}: Differentiable Architecture Search},
volume = {abs/1806.09055},
journaltitle = {{ArXiv}},
author = {Liu, Hanxiao and Simonyan, Karen and Yang, Yiming},
date = {2018}
}
@inproceedings{jin_auto-keras_2018,
title = {Auto-Keras: An Efficient Neural Architecture Search System},
booktitle = {{KDD}},
author = {Jin, Haifeng and Song, Qingquan and Hu, Xia},
date = {2018}
}
@article{mcculloch_logical_1943,
title = {A logical calculus of the ideas immanent in nervous activity},
volume = {5},
issn = {1522-9602},
url = {https://doi.org/10.1007/BF02478259},
doi = {10.1007/BF02478259},
abstract = {Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.},
pages = {115--133},
number = {4},
journaltitle = {The bulletin of mathematical biophysics},
author = {{McCulloch}, Warren S. and Pitts, Walter},
date = {1943}
}
@article{esteves_3d_2017,
title = {3D object classification and retrieval with Spherical {CNNs}},
volume = {abs/1711.06721},
url = {http://arxiv.org/abs/1711.06721},
journaltitle = {{CoRR}},
author = {Esteves, Carlos and Allen-Blanchette, Christine and Makadia, Ameesh and Daniilidis, Kostas},
date = {2017},
eprint = {1711.06721},
eprinttype = {arxiv}
}
@book{zhang_visual_2018,
title = {Visual Interpretability for Deep Learning: a Survey},
author = {Zhang, Quanshi and Zhu, Song-Chun},
date = {2018},
eprint = {1802.00614},
eprinttype = {arxiv}
}
@article{yong_survey_2012,
title = {A Survey of Visualization Tools in Medical Imaging},
volume = {56},
issn = {1877-0428},
url = {http://www.sciencedirect.com/science/article/pii/S187704281204116X},
doi = {10.1016/j.sbspro.2012.09.654},
abstract = {More than 30 students from university campus participated in the Development of Biomedical Image Processing Software Package for New Learners Survey investigating the use of software package for processing and editing image. The survey was available online for six months. Facts and opinions were sought to learn the general information, interactive image processing tool, non-interactive (automatic) tool, current status and future of image processing package tool. Composed of 19 questions, the survey built a comprehensive picture of the software package, programming language, workflow of the tool and captured the attitudes of the respondents. Result shows that {MATLAB} was difficult to use but it was viewed in high regard however. The result of this study is expected to be beneficial and able to assist users on effective image processing and analysis in a newly developed software package.},
pages = {265--271},
journaltitle = {Procedia - Social and Behavioral Sciences},
author = {Yong, Ching Yee and Chew, Kim Mey and Mahmood, Nasrul Humaimi and Ariffin, Ismail},
date = {2012},
keywords = {Image editting, Image processing, Medical imaging, Software package, Visualisation tools}
}
@book{muller_miscnn_2019,
title = {{MIScnn}: A Framework for Medical Image Segmentation with Convolutional Neural Networks and Deep Learning},
author = {Müller, Dominik and Kramer, Frank},
date = {2019},
eprint = {1910.09308},
eprinttype = {arxiv}
}
@article{chetlur_cudnn_2014,
title = {{cuDNN}: Efficient Primitives for Deep Learning},
journaltitle = {{arXiv}: Neural and Evolutionary Computing},
author = {Chetlur, Sharan and Woolley, Cliff and Vandermersch, Philippe and Cohen, Jonathan D and Tran, John and Catanzaro, Bryan and Shelhamer, Evan},
date = {2014}
}
@article{maloney_scratch_2010,
title = {The Scratch Programming Language and Environment},
volume = {10},
doi = {10.1145/1868358.1868363},
pages = {16},
journaltitle = {{ACM} Transactions on Computing Education ({TOCE})},
author = {Maloney, John and Resnick, Mitchel and Rusk, Natalie and Silverman, Brian and Eastmond, Evelyn},
date = {2010}
}
@article{fischl_freesurfer_2012,
title = {{FreeSurfer}},
volume = {62},
issn = {1095-9572 (Electronic) 1053-8119 (Linking)},
url = {http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3685476/},
doi = {10.1016/j.neuroimage.2012.01.021},
abstract = {{FreeSurfer} is a suite of tools for the analysis of neuroimaging data that provides an array of algorithms to quantify the functional, connectional and structural properties of the human brain. It has evolved from a package primarily aimed at generating surface representations of the cerebral cortex into one that automatically creates models of most macroscopically visible structures in the human brain given any reasonable T1-weighted input image. It is freely available, runs on a wide variety of hardware and software platforms, and is open source.},
pages = {774--781},
number = {2},
journaltitle = {Neuroimage},
author = {Fischl, B.},
date = {2012},
keywords = {*Algorithms, 20th Century, 21st Century, Brain Mapping/*history/methods, Brain/anatomy \& histology, Computer-Assisted/*history/methods, History, Humans, Image Processing, Magnetic Resonance Imaging/*history/methods, Software/*history}
}
@article{walt_numpy_2011,
title = {The {NumPy} Array: A Structure for Efficient Numerical Computation},
volume = {13},
doi = {10.1109/MCSE.2011.37},
pages = {22--30},
number = {2},
journaltitle = {Computing in Science Engineering},
author = {Walt, S. van der and Colbert, S. C. and Varoquaux, G.},
date = {2011-03},
keywords = {Arrays, Computational efficiency, Finite element methods, {NumPy}, Numerical analysis, Performance evaluation, Python, Python programming language, Resource management, Vector quantization, data structures, high level language, high level languages, mathematics computing, numerical analysis, numerical computation, numerical computations, numerical data, numpy array, programming libraries, scientific programming}
}
@article{paszke_automatic_2017,
title = {Automatic differentiation in {PyTorch}},
author = {Paszke, Adam and Gross, Sam and Chintala, Soumith and Chanan, Gregory and Yang, Edward and {DeVito}, Zachary and Lin, Zeming and Desmaison, Alban and Antiga, Luca and Lerer, Adam},
date = {2017}
}
@article{lowekamp_design_2013,
title = {The design of {SimpleITK}},
volume = {7},
doi = {10.3389/fninf.2013.00045},
pages = {45},
journaltitle = {Frontiers in neuroinformatics},
author = {Lowekamp, Bradley and Chen, David and Ibanez, Luis and Blezek, Daniel},
date = {2013}
}
@article{oliphant_python_2007,
title = {Python for Scientific Computing},
volume = {9},
url = {https://aip.scitation.org/doi/abs/10.1109/MCSE.2007.58},
doi = {10.1109/MCSE.2007.58},
pages = {10--20},
number = {3},
journaltitle = {Computing in Science \& Engineering},
author = {Oliphant, Travis E.},
date = {2007}
}
@article{jenkinson_fsl_2012,
title = {{FSL}},
volume = {62},
issn = {1053-8119},
url = {http://www.sciencedirect.com/science/article/pii/S1053811911010603},
doi = {10.1016/j.neuroimage.2011.09.015},
abstract = {{FSL} (the {FMRIB} Software Library) is a comprehensive library of analysis tools for functional, structural and diffusion {MRI} brain imaging data, written mainly by members of the Analysis Group, {FMRIB}, Oxford. For this {NeuroImage} special issue on “20 years of {fMRI}” we have been asked to write about the history, developments and current status of {FSL}. We also include some descriptions of parts of {FSL} that are not well covered in the existing literature. We hope that some of this content might be of interest to users of {FSL}, and also maybe to new research groups considering creating, releasing and supporting new software packages for brain image analysis.},
pages = {782--790},
number = {2},
journaltitle = {{NeuroImage}},
author = {Jenkinson, Mark and Beckmann, Christian F. and Behrens, Timothy E. J. and Woolrich, Mark W. and Smith, Stephen M.},
date = {2012},
keywords = {{FSL}, Software}
}
@article{avants_reproducible_2011,
title = {A reproducible evaluation of {ANTs} similarity metric performance in brain image registration},
volume = {54},
issn = {1053-8119},
url = {http://www.sciencedirect.com/science/article/pii/S1053811910012061},
doi = {10.1016/j.neuroimage.2010.09.025},
abstract = {The United States National Institutes of Health ({NIH}) commit significant support to open-source data and software resources in order to foment reproducibility in the biomedical imaging sciences. Here, we report and evaluate a recent product of this commitment: Advanced Neuroimaging Tools ({ANTs}), which is approaching its 2.0 release. The {ANTs} open source software library consists of a suite of state-of-the-art image registration, segmentation and template building tools for quantitative morphometric analysis. In this work, we use {ANTs} to quantify, for the first time, the impact of similarity metrics on the affine and deformable components of a template-based normalization study. We detail the {ANTs} implementation of three similarity metrics: squared intensity difference, a new and faster cross-correlation, and voxel-wise mutual information. We then use two-fold cross-validation to compare their performance on openly available, manually labeled, T1-weighted {MRI} brain image data of 40 subjects ({UCLA}'s {LPBA}40 dataset). We report evaluation results on cortical and whole brain labels for both the affine and deformable components of the registration. Results indicate that the best {ANTs} methods are competitive with existing brain extraction results (Jaccard=0.958) and cortical labeling approaches. Mutual information affine mapping combined with cross-correlation diffeomorphic mapping gave the best cortical labeling results (Jaccard=0.669±0.022). Furthermore, our two-fold cross-validation allows us to quantify the similarity of templates derived from different subgroups. Our open code, data and evaluation scripts set performance benchmark parameters for this state-of-the-art toolkit. This is the first study to use a consistent transformation framework to provide a reproducible evaluation of the isolated effect of the similarity metric on optimal template construction and brain labeling.},
pages = {2033--2044},
number = {3},
journaltitle = {{NeuroImage}},
author = {Avants, Brian B. and Tustison, Nicholas J. and Song, Gang and Cook, Philip A. and Klein, Arno and Gee, James C.},
date = {2011}
}
@book{ibanez_itk_2003,
edition = {First},
title = {The {ITK} Software Guide},
publisher = {Kitware, Inc.},
author = {Ibanez, L. and Schroeder, W. and Ng, L. and Cates, J.},
date = {2003}
}
@article{steiner_pytorch_2019,
title = {{PyTorch}: An Imperative Style, High-Performance Deep Learning Library},
author = {Steiner, Benoit and Devito, Zachary and Chintala, Soumith and Gross, Sam and Paszke, Adam and Massa, Francisco and Lerer, Adam and Chanan, Gregory and Lin, Zeming and Yang, Edward and {others}},
date = {2019}
}
@article{goode_openslide_2013,
title = {{OpenSlide}: A vendor-neutral software foundation for digital pathology},
volume = {4},
pages = {27},
number = {1},
journaltitle = {Journal of Pathology Informatics},
author = {Goode, Adam and Gilbert, Benjamin and Harkes, Jan and Jukic, Drazen M and Satyanarayanan, Mahadev},
date = {2013}
}
@article{gibson_niftynet_2018,
title = {{NiftyNet}: a deep-learning platform for medical imaging},
volume = {158},
issn = {0169-2607},
url = {http://www.sciencedirect.com/science/article/pii/S0169260717311823},
doi = {10.1016/j.cmpb.2018.01.025},
abstract = {Background and objectives Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis and adapting them for this domain of application requires substantial implementation effort. Consequently, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source {NiftyNet} platform for deep learning in medical imaging. The ambition of {NiftyNet} is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. Methods The {NiftyNet} infrastructure provides a modular deep-learning pipeline for a range of medical imaging applications including segmentation, regression, image generation and representation learning applications. Components of the {NiftyNet} pipeline including data loading, data augmentation, network architectures, loss functions and evaluation metrics are tailored to, and take advantage of, the idiosyncracies of medical image analysis and computer-assisted intervention. {NiftyNet} is built on the {TensorFlow} framework and supports features such as {TensorBoard} visualization of 2D and 3D images and computational graphs by default. Results We present three illustrative medical image analysis applications built using {NiftyNet} infrastructure: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. 
Conclusions The {NiftyNet} infrastructure enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning applications, or extend the platform to new applications.},
pages = {113--122},
journaltitle = {Computer Methods and Programs in Biomedicine},
author = {Gibson, Eli and Li, Wenqi and Sudre, Carole and Fidon, Lucas and Shakir, Dzhoshkun I. and Wang, Guotai and Eaton-Rosen, Zach and Gray, Robert and Doel, Tom and Hu, Yipeng and Whyntie, Tom and Nachev, Parashkev and Modat, Marc and Barratt, Dean C. and Ourselin, Sébastien and Cardoso, M. Jorge and Vercauteren, Tom},
date = {2018},
keywords = {Convolutional neural network, Deep learning, Generative adversarial network, Image regression, Medical image analysis, Segmentation}
}
@article{abadi_tensorflow_2016,
title = {{TensorFlow}: A system for large-scale machine learning},
volume = {abs/1605.08695},
url = {http://arxiv.org/abs/1605.08695},
journaltitle = {{CoRR}},
author = {Abadi, Martín and Barham, Paul and Chen, Jianmin and Chen, Zhifeng and Davis, Andy and Dean, Jeffrey and Devin, Matthieu and Ghemawat, Sanjay and Irving, Geoffrey and Isard, Michael and Kudlur, Manjunath and Levenberg, Josh and Monga, Rajat and Moore, Sherry and Murray, Derek Gordon and Steiner, Benoit and Tucker, Paul A. and Vasudevan, Vijay and Warden, Pete and Wicke, Martin and Yu, Yuan and Zhang, Xiaoqiang},
date = {2016},
eprint = {1605.08695},
eprinttype = {arxiv}
}
@article{jia_caffe_2014,
title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
journaltitle = {{arXiv} preprint {arXiv}:1408.5093},
author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
date = {2014}
}
@article{magee_colour_2009,
title = {Colour Normalisation in Digital Histopathology Images},
journaltitle = {Proc Optical Tissue Image analysis in Microscopy, Histopathology and Endoscopy ({MICCAI} Workshop)},
author = {Magee, Derek and Treanor, Darren and Crellin, Doreen and Shires, Michael and Smith, Katherine and Mohee, Kevin and Quirke, Philip},
date = {2009}
}
@article{vahadane_structure-preserving_2016,
title = {Structure-Preserving Color Normalization and Sparse Stain Separation for Histological Images},
volume = {35},
pages = {1962--1971},
number = {8},
journaltitle = {{IEEE} Transactions on Medical Imaging},
author = {Vahadane, Abhishek and Peng, Tingying and Sethi, Amit and Albarqouni, Shadi and Wang, Lichao and Baust, Maximilian and Steiger, Katja and Schlitter, Anna Melissa and Esposito, Irene and Navab, Nassir},
date = {2016}
}
@article{reinhard_color_2001,
title = {Color Transfer between Images},
volume = {21},
doi = {10.1109/38.946629},
pages = {34--41},
journaltitle = {{IEEE} Computer Graphics and Applications},
author = {Reinhard, Erik and Ashikhmin, Michael and Gooch, Bruce and Shirley, Peter},
date = {2001}
}
@article{srameshkumar_speckle_2016,
title = {Speckle Noise Removal in {MRI} Scan Image Using {WB} – Filter},
volume = {5},
issn = {2319-8753},
number = {12},
journaltitle = {International Journal of Innovative Research in Science Engineering and Technology},
author = {Rameshkumar, S. and Thilak, J. Anish Jafrin and Suresh, P. and Sathishkumar, S. and Subramani, N.},
date = {2016-12}
}
@article{ssenthilraja_noise_2014,
title = {Noise Reduction in Computed Tomography Image Using {WB} – Filter},
volume = {5},
issn = {2229-5518},
number = {3},
journaltitle = {International Journal of Scientific \& Engineering Research},
author = {Senthilraja, S. and Suresh, P. and Suganthi, M.},
date = {2014-03}
}
@article{tustison_n4itk_2010,
title = {N4ITK: Improved N3 Bias Correction},
volume = {29},
doi = {10.1109/TMI.2010.2046908},
pages = {1310--1320},
number = {6},
journaltitle = {{IEEE} Transactions on Medical Imaging},
author = {Tustison, N. J. and Avants, B. B. and Cook, P. A. and Zheng, Y. and Egan, A. and Yushkevich, P. A. and Gee, J. C.},
date = {2010-06},
keywords = {Algorithms, Approximation algorithms, Artifacts, Availability, B-spline approximation, B-spline least-squares fitting, Brain, Brain modeling, Computer-Assisted, Documentation, Humans, Image Enhancement, Image Interpretation, Image databases, Lungs, Magnetic Resonance Imaging, N3, N4ITK, Reproducibility of Results, Robustness, Sensitivity and Specificity, Spline, Testing, bias correction, bias field, biomedical {MRI}, brain, hierarchical optimization scheme, image analysis, image segmentation, inhomogeneity, lung, lung image data, medical image processing, nonparametric nonuniform intensity normalization}
}
@misc{ferdouse_simulation_2011,
title = {Simulation and Performance Analysis of Adaptive Filtering Algorithms in Noise Cancellation},
author = {Ferdouse, Lilatul and Akhter, Nasrin and Nipa, Tamanna Haque and Jaigirdar, Fariha Tasmin},
date = {2011},
eprint = {1104.1962},
eprinttype = {arxiv}
}
@incollection{ogiela_preprocessing_2008,
location = {Berlin, Heidelberg},
title = {Preprocessing medical images and their overall enhancement},
isbn = {978-3-540-75402-2},
url = {https://doi.org/10.1007/978-3-540-75402-2_4},
abstract = {This chapter briefly discusses the main stages of image preprocessing. The introduction to this book mentioned that the preprocessing of medical image is subject to certain restrictions and is generally more complex than the processing of other image types [26, 52]. This is why, of the many different techniques and methods for image filtering, we have decided to discuss here only selected ones, most frequently applied to medical images and which have been proven to be suitable for that purpose in numerous practical cases. Their operation will be illustrated with examples of simple procedures aimed at improving the quality of imaging and allowing significant information to be generated for its use at the stages of image interpretation.},
pages = {65--97},
booktitle = {Modern Computational Intelligence Methods for the Interpretation of Medical Images},
publisher = {Springer Berlin Heidelberg},
author = {Ogiela, Marek R. and Tadeusiewicz, Ryszard},
date = {2008},
doi = {10.1007/978-3-540-75402-2_4}
}
@article{jeyavathana_survey_2016,
title = {A Survey: Analysis on Pre-processing and Segmentation Techniques for Medical Images},
journaltitle = {International Journal of Research and Scientific Innovation ({IJRSI})},
author = {Jeyavathana, R. and Ramasamy, Balasubramanian and Pandian, Anbarasa},
date = {2016}
}
@article{yao_survey_2017,
title = {A Survey on Pre-Processing in Image Matting},
volume = {32},
issn = {1860-4749},
url = {https://doi.org/10.1007/s11390-017-1709-z},
doi = {10.1007/s11390-017-1709-z},
abstract = {Pre-processing is an important step in digital image matting, which aims to classify more accurate foreground and background pixels from the unknown region of the input three-region mask (Trimap). This step has no relation with the well-known matting equation and only compares color differences between the current unknown pixel and those known pixels. These newly classified pure pixels are then fed to the matting process as samples to improve the quality of the final matte. However, in the research field of image matting, the importance of pre-processing step is still blurry. Moreover, there are no corresponding review articles for this step, and the quantitative comparison of Trimap and alpha mattes after this step still remains unsolved. In this paper, the necessity and the importance of pre-processing step in image matting are firstly discussed in details. Next, current pre-processing methods are introduced by using the following two categories: static thresholding methods and dynamic thresholding methods. Analyses and experimental results show that static thresholding methods, especially the most popular iterative method, can make accurate pixel classifications in those general Trimaps with relatively fewer unknown pixels. However, in a much larger Trimap, there methods are limited by the conservative color and spatial thresholds. In contrast, dynamic thresholding methods can make much aggressive classifications on much difficult cases, but still strongly suffer from noises and false classifications. In addition, the sharp boundary detector is further discussed as a prior of pure pixels. Finally, summaries and a more effective approach are presented for pre-processing compared with the existing methods.},
pages = {122--138},
number = {1},
journaltitle = {Journal of Computer Science and Technology},
author = {Yao, Gui-Lin},
date = {2017}
}
@article{radul_functional_2001,
title = {Functional Representations of Lawson Monads},
volume = {9},
pages = {457--463},
journaltitle = {Applied Categorical Structures},
author = {Radul, Taras},
date = {2001}
}
@inproceedings{lee_survey_2015,
title = {A survey of medical image processing tools},
doi = {10.1109/ICSECS.2015.7333105},
pages = {171--176},
booktitle = {2015 4th International Conference on Software Engineering and Computer Systems ({ICSECS})},
author = {Lee, L. and Liew, S.},
date = {2015},
keywords = {Biomedical image processing, Image segmentation, Medical diagnostic imaging, Software, clinical study, computer vision, diagnostic radiography, graphical schematic diagram, image processing, medical image processing, medical image processing software tool, medical image processing tools, operating systems, pipelined processors, radiation therapy, radiographic techniques, radiotherapy preparation, software tools, tools component, treatment planning}
}
@misc{crankshaw_inferline_2018,
title = {{InferLine}: {ML} Inference Pipeline Composition Framework},
author = {Crankshaw, Daniel and Sela, Gur-Eyal and Zumar, Corey and Mo, Xiangxi and Gonzalez, Joseph E. and Stoica, Ion and Tumanov, Alexey},
date = {2018},
eprint = {1812.01776},
eprinttype = {arxiv}
}
@misc{rajchl_neuronet_2018,
title = {{NeuroNet}: Fast and Robust Reproduction of Multiple Brain Image Segmentation Pipelines},
author = {Rajchl, Martin and Pawlowski, Nick and Rueckert, Daniel and Matthews, Paul M. and Glocker, Ben},
date = {2018},
eprint = {1806.04224},
eprinttype = {arxiv}
}
@misc{rajan_pi-pe_2019,
title = {Pi-{PE}: A Pipeline for Pulmonary Embolism Detection using Sparsely Annotated 3D {CT} Images},
author = {Rajan, Deepta and Beymer, David and Abedin, Shafiqul and Dehghan, Ehsan},
date = {2019},
eprint = {1910.02175},
eprinttype = {arxiv}
}
@misc{zhang_leveraging_2019,
title = {Leveraging Vision Reconstruction Pipelines for Satellite Imagery},
author = {Zhang, Kai and Sun, Jin and Snavely, Noah},
date = {2019},
eprint = {1910.02989},
eprinttype = {arxiv}
}
@misc{skibbe_marmonet_2019,
title = {{MarmoNet}: a pipeline for automated projection mapping of the common marmoset brain from whole-brain serial two-photon tomography},
author = {Skibbe, Henrik and Watakabe, Akiya and Nakae, Ken and Gutierrez, Carlos Enrique and Tsukada, Hiromichi and Hata, Junichi and Kawase, Takashi and Gong, Rui and Woodward, Alexander and Doya, Kenji and Okano, Hideyuki and Yamamori, Tetsuo and Ishii, Shin},
date = {2019},
eprint = {1908.00876},
eprinttype = {arxiv}
}
@misc{yang_xlnet_2019,
title = {{XLNet}: Generalized Autoregressive Pretraining for Language Understanding},
author = {Yang, Zhilin and Dai, Zihang and Yang, Yiming and Carbonell, Jaime and Salakhutdinov, Ruslan and Le, Quoc V.},
date = {2019},
eprint = {1906.08237},
eprinttype = {arxiv}
}
@article{chen_dual_2017,
title = {Dual Path Networks},
journaltitle = {{arXiv}: Computer Vision and Pattern Recognition},
author = {Chen, Yunpeng and Li, Jianan and Xiao, Huaxin and Jin, Xiaojie and Yan, Shuicheng and Feng, Jiashi},
date = {2017}
}
@article{he_mask_2017,
title = {Mask R-{CNN}},
journaltitle = {{arXiv}: Computer Vision and Pattern Recognition},
author = {He, Kaiming and Gkioxari, Georgia and Dollar, Piotr and Girshick, Ross},
date = {2017}
}
@misc{simonyan_very_2014,
title = {Very Deep Convolutional Networks for Large-Scale Image Recognition},
author = {Simonyan, Karen and Zisserman, Andrew},
date = {2014},
eprint = {1409.1556},
eprinttype = {arxiv}
}
@inproceedings{trullo_segmentation_2017,
title = {Segmentation of Organs at Risk in thoracic {CT} images using a {SharpMask} architecture and Conditional Random Fields},
volume = {2017},
doi = {10.1109/ISBI.2017.7950685},
pages = {1003--1006},
booktitle = {Proceedings. {IEEE} International Symposium on Biomedical Imaging},
author = {Trullo, R. and Petitjean, Caroline and Ruan, Su and Dubray, Bernard and Nie, D. and Shen, D.},
date = {2017}
}
@misc{berman_lovasz-softmax_2017,
title = {The Lovász-Softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks},
author = {Berman, Maxim and Triki, Amal Rannen and Blaschko, Matthew B.},
date = {2017},
eprint = {1705.08790},
eprinttype = {arxiv}
}
@misc{milletari_v-net_2016,
title = {V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation},
author = {Milletari, Fausto and Navab, Nassir and Ahmadi, Seyed-Ahmad},
date = {2016},
eprint = {1606.04797},
eprinttype = {arxiv}
}
@incollection{rumelhart_neurocomputing_1988,
location = {Cambridge, {MA}, {USA}},
title = {Learning Representations by Back-propagating Errors},
isbn = {0-262-01097-6},
url = {http://dl.acm.org/citation.cfm?id=65669.104451},
pages = {696--699},
booktitle = {Neurocomputing: Foundations of Research},
publisher = {{MIT} Press},
author = {Rumelhart, David E. and Hinton, Geoffrey E. and Williams, Ronald J.},
editor = {Anderson, James A. and Rosenfeld, Edward},
date = {1988}
}
@thesis{werbos_beyond_1974,
title = {Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences},
type = {phdthesis},
institution = {Harvard University},
author = {Werbos, Paul John},
date = {1974}
}
@article{ruder_overview_2016,
title = {An overview of gradient descent optimization algorithms},
volume = {abs/1609.04747},
url = {http://arxiv.org/abs/1609.04747},
journaltitle = {{CoRR}},
author = {Ruder, Sebastian},
date = {2016},
eprint = {1609.04747},
eprinttype = {arxiv}
}
@article{ioffe_batch_2015,
title = {Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift},
volume = {abs/1502.03167},
url = {http://arxiv.org/abs/1502.03167},
journaltitle = {{CoRR}},
author = {Ioffe, Sergey and Szegedy, Christian},
date = {2015},
eprint = {1502.03167},
eprinttype = {arxiv}
}
@article{srivastava_dropout_2014,
title = {Dropout: A Simple Way to Prevent Neural Networks from Overfitting},
volume = {15},
url = {http://jmlr.org/papers/v15/srivastava14a.html},
pages = {1929--1958},
journaltitle = {Journal of Machine Learning Research},
author = {Srivastava, Nitish and Hinton, Geoffrey and Krizhevsky, Alex and Sutskever, Ilya and Salakhutdinov, Ruslan},
date = {2014}
}
@online{noauthor_stochastic_2017,
title = {Stochastic gradient descent},
url = {https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Extensions_and_variants},
date = {2017-08}
}
@incollection{bottou_stochastic_2012,
title = {Stochastic Gradient Descent Tricks},
url = {https://doi.org/10.1007/978-3-642-35289-8_25},
pages = {421--436},
booktitle = {Neural Networks: Tricks of the Trade - Second Edition},
author = {Bottou, Léon},
date = {2012},
doi = {10.1007/978-3-642-35289-8_25}
}
@unpublished{zhang_unknow_nodate,
title = {unknow},
author = {Zhang, Liang and Kong, Xiangwen}
}
@unpublished{zhang_block_nodate,
title = {Block Level Skip Connections across Cascaded V-Net for Multi-organ Segmentation},
author = {Zhang, Liang and Zhang, Jiaming},
note = {In press}
}
@unpublished{zhang_u-net_nodate,
title = {U-net based analysis of {MRI} for Alzheimer’s disease diagnosis},
author = {Zhang, Liang and Fan, Zhonghao},
note = {In press}
}
@article{zhao_deep_2018,
title = {A deep learning model integrating {FCNNs} and {CRFs} for brain tumor segmentation},
volume = {43},
issn = {1361-8415},
url = {http://www.sciencedirect.com/science/article/pii/S136184151730141X},
doi = {10.1016/j.media.2017.10.002},
abstract = {Accurate and reliable brain tumor segmentation is a critical component in cancer diagnosis, treatment planning, and treatment outcome evaluation. Build upon successful deep learning techniques, a novel brain tumor segmentation method is developed by integrating fully convolutional neural networks ({FCNNs}) and Conditional Random Fields ({CRFs}) in a unified framework to obtain segmentation results with appearance and spatial consistency. We train a deep learning based segmentation model using 2D image patches and image slices in following steps: 1) training {FCNNs} using image patches; 2) training {CRFs} as Recurrent Neural Networks ({CRF}-{RNN}) using image slices with parameters of {FCNNs} fixed; and 3) fine-tuning the {FCNNs} and the {CRF}-{RNN} using image slices. Particularly, we train 3 segmentation models using 2D image patches and slices obtained in axial, coronal and sagittal views respectively, and combine them to segment brain tumors using a voting based fusion strategy. Our method could segment brain images slice-by-slice, much faster than those based on image patches. We have evaluated our method based on imaging data provided by the Multimodal Brain Tumor Image Segmentation Challenge ({BRATS}) 2013, {BRATS} 2015 and {BRATS} 2016. The experimental results have demonstrated that our method could build a segmentation model with Flair, T1c, and T2 scans and achieve competitive performance as those built with Flair, T1, T1c, and T2 scans.},
pages = {98--111},
journaltitle = {Medical Image Analysis},
author = {Zhao, Xiaomei and Wu, Yihong and Song, Guidong and Li, Zhenye and Zhang, Yazhuo and Fan, Yong},
date = {2018},
keywords = {Brain tumor segmentation, Conditional random fields, Deep learning, Fully convolutional neural networks}
}
@article{graham_xy_2018,
title = {{XY} Network for Nuclear Segmentation in Multi-Tissue Histology Images},
volume = {abs/1812.06499},
url = {http://arxiv.org/abs/1812.06499},
journaltitle = {{CoRR}},
author = {Graham, Simon and Vu, Quoc Dang and Raza, Shan e Ahmed and Kwak, Jin Tae and Rajpoot, Nasir M.},
date = {2018},
eprint = {1812.06499},
eprinttype = {arxiv}
}
@inproceedings{khagi_alzheimers_2019,
title = {Alzheimer's disease Classification from Brain {MRI} based on transfer learning from {CNN}},
doi = {10.1109/BMEiCON.2018.8609974},
author = {Khagi, Bijen and Lee, Chung and Kwon, Goo-Rak},
date = {2019}
}
@inproceedings{khvostikov_classification_2017,
title = {Classification methods on different imaging modalities for Alzheimer disease studies},
author = {Khvostikov, Alexander and Benois-Pineau, Jenny and Krylov, Andrey and Catheline, Gwenaelle},
date = {2017}
}
@article{lu_automatic_2017,
title = {Automatic 3D liver location and segmentation via convolutional neural network and graph cut},
volume = {12},
issn = {1861-6429},
url = {https://doi.org/10.1007/s11548-016-1467-3},
doi = {10.1007/s11548-016-1467-3},
abstract = {Segmentation of the liver from abdominal computed tomography ({CT}) images is an essential step in some computer-assisted clinical interventions, such as surgery planning for living donor liver transplant, radiotherapy and volume measurement. In this work, we develop a deep learning algorithm with graph cut refinement to automatically segment the liver in {CT} scans.},
pages = {171--182},
number = {2},
journaltitle = {International Journal of Computer Assisted Radiology and Surgery},
author = {Lu, Fang and Wu, Fa and Hu, Peijun and Peng, Zhiyi and Kong, Dexing},
date = {2017}
}
@article{naylor_segmentation_2019,
title = {Segmentation of Nuclei in Histopathology Images by Deep Regression of the Distance Map},
volume = {38},
doi = {10.1109/TMI.2018.2865709},
pages = {448--459},
number = {2},
journaltitle = {{IEEE} Transactions on Medical Imaging},
author = {Naylor, P. and Laé, M. and Reyal, F. and Walter, T.},
date = {2019},
keywords = {Biology, Cancer, Cancer research, Computer architecture, Haematoxylin and Eosin stained histopathology data, Image segmentation, Pathology, Task analysis, Tumors, biological tissues, cancer, cell nuclei, cellular biophysics, deep learning, deep regression, digital pathology, diseased tissue, diseases, distance map, fully convolutional networks, histopathology, histopathology data, histopathology images, image segmentation, interpretable models, medical image processing, neural nets, nuclei segmentation, patient diagnosis, prognosis tasks, quantitative profiles, regression task, segmentation problem}
}
@article{tofighi_prior_2019,
title = {Prior Information Guided Regularized Deep Learning for Cell Nucleus Detection},
volume = {38},
doi = {10.1109/TMI.2019.2895318},
pages = {2047--2058},
number = {9},
journaltitle = {{IEEE} Transactions on Medical Imaging},
author = {Tofighi, M. and Guo, T. and Vanamala, J. K. P. and Monga, V.},
date = {2019-09},
keywords = {Biomedical imaging, Computer architecture, Deep learning, Image edge detection, Image segmentation, Microprocessors, Nucleus detection, Shape, {TSP}-{CNN}, biology computing, canonical cell nuclei shapes, cell nuclei detection, cell nucleus boundary, cell nucleus detection, cellular biophysics, cellular image quality, convolutional neural nets, convolutional neural networks, deep learning, deep learning methods, domain expert, fixed processing part, input images, labeled nuclei locations, learnable layers, learnable shapes, learning (artificial intelligence), medical image processing, morphological processing, multiple cell nuclei, network structures, nuclear morphology, nucleus shapes, regularization terms, shape priors, spatial processing, training set, tunable {SP}-{CNN}}
}
@inproceedings{cai_pancreas_2017,
location = {Cham},
title = {Pancreas Segmentation in {MRI} Using Graph-Based Decision Fusion on Convolutional Neural Networks},
isbn = {978-3-319-66179-7},
abstract = {Deep neural networks have demonstrated very promising performance on accurate segmentation of challenging organs (e.g., pancreas) in abdominal {CT} and {MRI} scans. The current deep learning approaches conduct pancreas segmentation by processing sequences of 2D image slices independently through deep, dense per-pixel masking for each image, without explicitly enforcing spatial consistency constraint on segmentation of successive slices. We propose a new convolutional/recurrent neural network architecture to address the contextual learning and segmentation consistency problem. A deep convolutional sub-network is first designed and pre-trained from scratch. The output layer of this network module is then connected to recurrent layers and can be fine-tuned for contextual learning, in an end-to-end manner. Our recurrent sub-network is a type of Long short-term memory ({LSTM}) network that performs segmentation on an image by integrating its neighboring slice segmentation predictions, in the form of a dependent sequence processing. Additionally, a novel segmentation-direct loss function (named Jaccard Loss) is proposed and deep networks are trained to optimize Jaccard Index ({JI}) directly. Extensive experiments are conducted to validate our proposed deep models, on quantitative pancreas segmentation using both {CT} and {MRI} scans. Our method outperforms the state-of-the-art work on {CT} [11] and {MRI} pancreas segmentation [1], respectively.},
pages = {674--682},
booktitle = {Medical Image Computing and Computer Assisted Intervention − {MICCAI} 2017},
publisher = {Springer International Publishing},
author = {Cai, Jinzheng and Lu, Le and Xie, Yuanpu and Xing, Fuyong and Yang, Lin},
editor = {Descoteaux, Maxime and Maier-Hein, Lena and Franz, Alfred and Jannin, Pierre and Collins, D. Louis and Duchesne, Simon},
date = {2017}
}
@inproceedings{zhou_fixed-point_2017,
location = {Cham},
title = {A Fixed-Point Model for Pancreas Segmentation in Abdominal {CT} Scans},
isbn = {978-3-319-66182-7},
abstract = {Deep neural networks have been widely adopted for automatic organ segmentation from abdominal {CT} scans. However, the segmentation accuracy of some small organs (e.g., the pancreas) is sometimes below satisfaction, arguably because deep networks are easily disrupted by the complex and variable background regions which occupies a large fraction of the input volume. In this paper, we formulate this problem into a fixed-point model which uses a predicted segmentation mask to shrink the input region. This is motivated by the fact that a smaller input region often leads to more accurate segmentation. In the training process, we use the ground-truth annotation to generate accurate input regions and optimize network weights. On the testing stage, we fix the network parameters and update the segmentation results in an iterative manner. We evaluate our approach on the {NIH} pancreas segmentation dataset, and outperform the state-of-the-art by more than 4\%, measured by the average Dice-Sørensen Coefficient ({DSC}). In addition, we report 62.43\% {DSC} in the worst case, which guarantees the reliability of our approach in clinical applications.},
pages = {693--701},
booktitle = {Medical Image Computing and Computer Assisted Intervention − {MICCAI} 2017},
publisher = {Springer International Publishing},
author = {Zhou, Yuyin and Xie, Lingxi and Shen, Wei and Wang, Yan and Fishman, Elliot K. and Yuille, Alan L.},
editor = {Descoteaux, Maxime and Maier-Hein, Lena and Franz, Alfred and Jannin, Pierre and Collins, D. Louis and Duchesne, Simon},
date = {2017}
}
@inproceedings{dou_3d_2016,
location = {Cham},
title = {3D Deeply Supervised Network for Automatic Liver Segmentation from {CT} Volumes},
isbn = {978-3-319-46723-8},
abstract = {Automatic liver segmentation from {CT} volumes is a crucial prerequisite yet challenging task for computer-aided hepatic disease diagnosis and treatment. In this paper, we present a novel 3D deeply supervised network (3D {DSN}) to address this challenging task. The proposed 3D {DSN} takes advantage of a fully convolutional architecture which performs efficient end-to-end learning and inference. More importantly, we introduce a deep supervision mechanism during the learning process to combat potential optimization difficulties, and thus the model can acquire a much faster convergence rate and more powerful discrimination capability. On top of the high-quality score map produced by the 3D {DSN}, a conditional random field model is further employed to obtain refined segmentation results. We evaluated our framework on the public {MICCAI}-{SLiver}07 dataset. Extensive experiments demonstrated that our method achieves competitive segmentation results to state-of-the-art approaches with a much faster processing speed.},
pages = {149--157},
booktitle = {Medical Image Computing and Computer-Assisted Intervention – {MICCAI} 2016},
publisher = {Springer International Publishing},
author = {Dou, Qi and Chen, Hao and Jin, Yueming and Yu, Lequan and Qin, Jing and Heng, Pheng-Ann},
editor = {Ourselin, Sebastien and Joskowicz, Leo and Sabuncu, Mert R. and Unal, Gozde and Wells, William},
date = {2016}
}
@article{song_multi-layer_2019,
title = {Multi-layer boosting sparse convolutional model for generalized nuclear segmentation from histopathology images},
volume = {176},
issn = {0950-7051},
url = {http://www.sciencedirect.com/science/article/pii/S095070511930156X},
doi = {10.1016/j.knosys.2019.03.031},
abstract = {It is a challenging problem to achieve generalized nuclear segmentation in digital histopathology images. Existing techniques, using either handcrafted features in learning-based models or traditional image analysis-based approaches, do not effectively tackle the challenging cases, such as crowded nuclei, chromatin-sparse, and heavy background clutter. In contrast, deep networks have achieved state-of-the-art performance in modeling various nuclear appearances. However, their success is limited due to the size of the considered networks. We solve these problems by reformulating nuclear segmentation in terms of a cascade 2-class classification problem and propose a multi-layer boosting sparse convolutional ({ML}-{BSC}) model. In the proposed {ML}-{BSC} model, discriminative probabilistic binary decision trees ({PBDTs}) are designed as weak learners in each layer to cope with challenging cases. A sparsity-constrained cascade structure enables the {ML}-{BSC} model to improve representation learning. Comparing to the existing techniques, our method can accurately separate individual nuclei in complex histopathology images, and it is more robust against chromatin-sparse and heavy background clutter. An evaluation carried out using three disparate datasets demonstrates the superiority of our method over the state-of-the-art supervised approaches in terms of segmentation accuracy.},
pages = {40--53},
journaltitle = {Knowledge-Based Systems},
author = {Song, Jie and Xiao, Liang and Molaei, Mohsen and Lian, Zhichao},
date = {2019},
keywords = {Cascade classification, Multi-layer boosting sparse convolutional model, Nucleus segmentation, Probabilistic binary decision tree, Representation learning}
}
@article{qaiser_fast_2019,
title = {Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features},
volume = {55},
issn = {1361-8415},
url = {http://www.sciencedirect.com/science/article/pii/S1361841518302688},
doi = {10.1016/j.media.2019.03.014},
abstract = {Tumor segmentation in whole-slide images of histology slides is an important step towards computer-assisted diagnosis. In this work, we propose a tumor segmentation framework based on the novel concept of persistent homology profiles ({PHPs}). For a given image patch, the homology profiles are derived by efficient computation of persistent homology, which is an algebraic tool from homology theory. We propose an efficient way of computing topological persistence of an image, alternative to simplicial homology. The {PHPs} are devised to distinguish tumor regions from their normal counterparts by modeling the atypical characteristics of tumor nuclei. We propose two variants of our method for tumor segmentation: one that targets speed without compromising accuracy and the other that targets higher accuracy. The fast version is based on a selection of exemplar image patches from a convolution neural network ({CNN}) and patch classification by quantifying the divergence between the {PHPs} of exemplars and the input image patch. Detailed comparative evaluation shows that the proposed algorithm is significantly faster than competing algorithms while achieving comparable results. The accurate version combines the {PHPs} and high-level {CNN} features and employs a multi-stage ensemble strategy for image patch labeling. Experimental results demonstrate that the combination of {PHPs} and {CNN} features outperform competing algorithms. This study is performed on two independently collected colorectal datasets containing adenoma, adenocarcinoma, signet, and healthy cases. Collectively, the accurate tumor segmentation produces the highest average patch-level F1-score, as compared with competing algorithms, on malignant and healthy cases from both the datasets. Overall the proposed framework highlights the utility of persistent homology for histopathology image analysis.},
pages = {1--14},