Vesselin Petkov

Introduction

Short CV
insert short CV here

Full profile
This is the longer CV, presenting education, work experience, etc.

Publications

Jurafsky, Daniel, and James Martin. Speech and Language Processing, 2019.
Stanford Dependencies, 2020.
Universal Dependency Relations, 2020.
Stanford CoreNLP, 2020.
Aeffner, Famke, Mark Zarella, Nathan Buchbinder, Marilyn M. Bui, Matthew Goodman, Douglas Hartman, Giovanni Lujan, Mariam Molani, Anil Parwani, Kate Lillard et al. "Introduction to digital image analysis in whole-slide imaging: A white paper from the Digital Pathology Association." Journal of Pathology Informatics 10, no. 1 (2019): 9.
Afifi, Mahmoud, and Michael S. Brown. "What else can fool deep learning? Addressing color constancy errors on deep neural network performance." In Proceedings of the IEEE International Conference on Computer Vision, 2019.
Akhtar, Naveed, and Ajmal Mian. "Threat of adversarial attacks on deep learning in computer vision: A survey." IEEE Access 6 (2018): 14410-14430.
Al-Janabi, Shaimaa, Henk-Jan van Slooten, Mike Visser, Tjeerd van der Ploeg, Paul J. van Diest, and Mehdi Jiwa. "Evaluation of mitotic activity index in breast cancer using whole slide digital images." PLoS ONE 8, no. 12 (2013).
Armanious, Karim, Youssef Mecky, Sergios Gatidis, and Bin Yang. "Adversarial inpainting of medical image modalities." In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019.
Athalye, Anish, Logan Engstrom, Andrew Ilyas, and Kevin Kwok. "Synthesizing robust adversarial examples." In 35th International Conference on Machine Learning, PMLR, vol. 80. Stockholm, Sweden, 2018.
Brown, T. B., D. Mané, A. Roy, M. Abadi, and J. Gilmer. "Adversarial Patch." arXiv e-prints (2018).
Chuquicusma, Maria, Sarfaraz Hussein, Jeremy Burt, and Ulas Bagci. "How to fool radiologists with generative adversarial networks? A visual Turing test for lung cancer diagnosis." In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), 2018.
Deng, Yepeng, Chunkai Zhang, and Xuan Wang. "A multi-objective examples generation approach to fool the deep neural networks in the black-box scenario." In 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC), 2019.
Finlayson, Samuel, Hyung Won Chung, Isaac Kohane, and Andrew Beam. "Adversarial attacks against medical deep learning systems." arXiv e-prints (2019).
Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
Gu, Tianyu, Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. "BadNets: Evaluating backdooring attacks on deep neural networks." IEEE Access 7 (2019): 47230-47244.
Gu, Zhaoquan, Weixiong Hu, Chuanjing Zhang, Hui Lu, Lihua Yin, and Le Wang. "Gradient shielding: Towards understanding vulnerability of deep neural networks." IEEE Transactions on Network Science and Engineering, 2020.
Junqueira, Luis Carlos, and Jose Carneiro. Basic Histology: Text & Atlas. McGraw-Hill Professional, 2005.
Kieffer, Brady, Morteza Babaie, Shivam Kalra, and H. R. Tizhoosh. "Convolutional neural networks for histopathology image classification: Training vs. using pre-trained networks." In 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA), 2017.
Komura, Daisuke, and Shumpei Ishikawa. "Machine learning methods for histopathological image analysis." Computational and Structural Biotechnology Journal 16 (2018): 34-42.
Kügler, David, Alexander Distergoft, Arjan Kuijper, and Anirban Mukhopadhyay. "Exploring adversarial examples." In Understanding and Interpreting Machine Learning in Medical Image Computing Applications, 70-78. Springer, 2018.
Kumar, Neeraj, Ruchika Verma, Sanuj Sharma, Surabhi Bhargava, Abhishek Vahadane, and Amit Sethi. "A dataset and a technique for generalized nuclear segmentation for computational pathology."
IEEE Transactions on Medical Imaging 36, no. 7 (2017): 1550-1560.
Kumar, Vinay, Abul Abbas, and Jon Aster. Robbins Basic Pathology. Philadelphia: Elsevier Saunders, 2017.
Madry, Aleksander, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. "Towards Deep Learning Models Resistant to Adversarial Attacks." arXiv e-prints arXiv:1706.06083 (2019).
Mihajlović, Marko, and Nikola Popović. "Fooling a neural network with common adversarial noise." In 2018 19th IEEE Mediterranean Electrotechnical Conference (MELECON), 2018.