Overview

The “Gen AI for E-commerce” workshop explores the role of Generative Artificial Intelligence (Gen AI) in transforming e-commerce through enhanced user experience and operational efficiency. E-commerce companies grapple with challenges such as a lack of quality product content, subpar user experiences, and sparse datasets, and Gen AI offers significant potential to address these complexities. Yet deploying these technologies at scale presents its own challenges, including hallucination, excessive cost, increased response latency, and limited generalization in sparse-data environments. This workshop brings together experts from academia and industry to discuss these challenges and opportunities, showcasing case studies, breakthroughs, and insights into practical implementations of Gen AI in e-commerce.

Call for papers

We welcome papers that leverage Gen AI in e-commerce. Detailed topics are listed in the CFP. Papers can be submitted via EasyChair.

Information for the day of the workshop

Workshop at CIKM 2024

  • Paper submission deadline: 16 August 2024
  • Paper acceptance notification: 30 August 2024
  • Workshop: 25 October 2024

Schedule

We have a full-day program on October 25 in Room 110 BC, Boise Centre, Boise, Idaho.

Time (PDT) Agenda
9:00-9:10am Registration and Welcome
9:10-9:50am Keynote by Himabindu Lakkaraju: Mechanics and Ethics of Search Engine Optimization in the Era of LLMs (40 min)
9:50-10:30am Keynote by Xia Ning: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data (40 min)
10:30-11:00am Coffee Break
11:00-11:15am Paper Presentation: LLM-Modulo-Rec: Leveraging Approximate World-Knowledge of LLMs to Improve eCommerce Search Ranking Under Data Paucity (10 min) + Q&A (5 min)
11:15-11:30am Paper Presentation: Hierarchical Knowledge Graph Construction from Images for Scalable E-Commerce (10 min) + Q&A (5 min)
11:30-12:10pm Keynote by Manisha Verma: Harnessing Creativity with Foundational Models (40 min)
12:10-12:25pm Paper Presentation: ReScorer: An Aggregation and Alignment Technique for Building Trust into LLM Reasons (10 min) + Q&A (5 min)
12:25-12:40pm Paper Presentation: Decoding Style: Efficient Fine-Tuning of LLMs for Image-Guided Outfit Recommendation with Preference Feedback (10 min) + Q&A (5 min)
12:45-1:45pm Lunch Break
1:45-2:25pm Keynote by Vinodh Kumar Sunkara: LLM-Integrated Meta Ad Promotion Sourcing (40 min)
2:25-3:30pm Poster Session
3:30-4:00pm Coffee Break
4:00-5:00pm Panel: GenAI for Ecommerce Search & Recommendations (60 min)
Moderator: Tracy Holloway King
Panelists: Vamsi Salaka (Amazon), Dingxian Wang (Upwork), Topojoy Biswas (Walmart), Aditya Chichani (Walmart)

Keynote Speakers

Himabindu Lakkaraju

Harvard University
Mechanics and Ethics of Search Engine Optimization in the Era of LLMs

Bio: Himabindu (Hima) Lakkaraju is an assistant professor at Harvard University focusing on the explainability, fairness, and robustness of machine learning models. She has also been working with domain experts in policy and healthcare to understand the real-world implications of explainable and fair ML. Hima has been named one of the world’s top innovators under 35 by both MIT Tech Review and Vanity Fair. Her research has received best paper awards at the SIAM International Conference on Data Mining (SDM) and INFORMS, as well as grants from NSF, Google, Amazon, and Bayer. Hima has given keynote talks at top ML conferences and workshops including CIKM, ICML, NeurIPS, AAAI, and CVPR, and her research has been showcased by popular media outlets including the New York Times, MIT Tech Review, TIME magazine, and Forbes. More recently, she co-founded the Trustworthy ML Initiative to enable easy access to resources on trustworthy ML and to build a community of researchers and practitioners working on the topic.
                                                                                                                                                                                               
Manisha Verma

JP Morgan Chase
Harnessing creativity with foundational models!

Abstract: We have all witnessed advertisements in some form or another throughout our childhood and adult lives. However, a paradigm shift is slowly underway in how these advertisements are created (sometimes at scale) with the proliferation of both language and multimodal models. Advertising is a high-churn, seasonal, and personalized industry, which poses interesting challenges for developing, evaluating, and productionizing these models. While we wish to harness the inherent creativity of these models (hallucinations, for once, are a good thing!), critical elements still need to be factually correct and free from location, population, or gender bias, especially when we want to reach millions of users in real time. In this talk I will give an overview of some of our past work, recent developments, and our experience launching tools for advertisers at scale. The slides are available at /assets/papers/GenAIECommerce2024/creative_generation_sota.pptx.pdf.
Bio: Manisha Verma is a Scientist at Amazon, NYC. She completed her PhD from University College London. Some of her recent work has been published at conferences such as WWW, RecSys, CIKM, WSDM and SIGIR. Over the past few years, she has worked with researchers at Google, Microsoft and Yahoo on improving advertisements. She has served on the program committee for WWW’22, SIGIR’21, ECIR’21, NAACL’20, NeuroIR17, DSHCM’17,’18, LearnIR’18 and CIKM’21.
                                                                                                                                                                                               
Xia Ning

Ohio State University
Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data

Abstract: Despite tremendous efforts in developing effective e-commerce models, conventional e-commerce models show limited success in generalist e-commerce modeling and suffer from unsatisfactory performance on new users and new products – a typical out-of-domain generalization challenge. Meanwhile, large language models (LLMs) demonstrate outstanding performance in generalist modeling and out-of-domain generalizability in many fields. Toward fully unleashing their power for e-commerce, in this talk I will present ECInstruct, the first open-sourced, large-scale, and high-quality benchmark instruction dataset for e-commerce. Leveraging ECInstruct, we develop eCeLLM, a series of e-commerce LLMs, by instruction-tuning general-purpose LLMs. Our comprehensive experiments and evaluation demonstrate that eCeLLM models substantially outperform baseline models, including the most advanced GPT-4 and state-of-the-art task-specific models, in in-domain evaluation. Moreover, eCeLLM exhibits excellent generalizability to out-of-domain settings, including unseen products and unseen instructions, highlighting its superiority as a generalist e-commerce model. Both the ECInstruct dataset and the eCeLLM models show great potential in empowering versatile and effective LLMs for e-commerce. ECInstruct and the eCeLLM models are publicly accessible through https://ninglab.github.io/eCeLLM/.
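To make the instruction-tuning recipe above concrete, here is a minimal sketch of supervised instruction tuning on an ECInstruct-style record; the field names, prompt template, and the small stand-in model are illustrative assumptions, not the eCeLLM training code.

```python
# Illustrative sketch only: fine-tune a general-purpose causal LM on one
# instruction-formatted e-commerce example (assumed instruction/input/output fields).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")            # small stand-in for a larger base LLM
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def format_example(ex: dict) -> str:
    # Assumed record layout; real instruction datasets define their own schema.
    return (f"### Instruction:\n{ex['instruction']}\n"
            f"### Input:\n{ex['input']}\n"
            f"### Response:\n{ex['output']}")

example = {
    "instruction": "Classify the sentiment of this product review as positive or negative.",
    "input": "The blender stopped working after two uses.",
    "output": "negative",
}

batch = tok(format_example(example), return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # standard causal-LM fine-tuning loss
loss.backward()
optimizer.step()
```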
Bio: Dr. Xia Ning is a Professor in the Biomedical Informatics Department (BMI) and the Computer Science and Engineering Department at The Ohio State University. She also holds a courtesy appointment with the Division of Medicinal Chemistry and Pharmacognosy, College of Pharmacy at OSU. She is the Section Chief of AI, Clinical Informatics, and Implementation Science at BMI, and the Associate Director of Biomedical Informatics at OSU Center for Clinical and Translational Science Institute (CTSI). She received her Ph.D. in Computer Science and Engineering from the University of Minnesota, Twin Cities, in 2012. Ning’s research is on Artificial Intelligence (AI) and Machine Learning with applications in health care and e-commerce. Her work on “SLIM: Sparse linear methods for top-n recommender systems” received the 10-Year Highest-Impact Paper Award at the IEEE International Conference on Data Mining (ICDM) in 2020.
                                                                                                                                                                                               
Vinodh Kumar Sunkara

Meta
LLM-Integrated Meta Ad Promotion Sourcing

Abstract: In this talk, we will explore the end-to-end delivery journey of promotional ads at Meta and how Large Language Models (LLMs) can be leveraged to extract promotional information from various sources such as advertisers, website crawling, and product catalogs. We will discuss the challenges associated with traditional ML/regex-based approaches to promo extraction and how in-context learning and prompt optimization can further improve the performance of LLM solutions. We will also delve into the technical details of scaling GenAI infrastructure for production launch and the offline/online evaluation criteria used to select the right candidate.
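As a concrete illustration of the in-context learning idea mentioned above, the sketch below builds a few-shot prompt that asks a text-completion LLM to return promotion fields as JSON. The field names, the example, and the `llm_complete` callable are hypothetical; this is not Meta’s production pipeline.

```python
# Hypothetical sketch: few-shot (in-context) prompting for promo extraction.
import json

FEW_SHOT = [
    {"text": "Save 20% on all running shoes through Sunday with code RUN20.",
     "promo": {"discount": "20%", "scope": "running shoes", "code": "RUN20", "expires": "Sunday"}},
]

def build_prompt(ad_text: str) -> str:
    lines = ["Extract the promotion as JSON with keys: discount, scope, code, expires."]
    for ex in FEW_SHOT:  # in-context examples steer both the extraction and the output format
        lines.append(f"Text: {ex['text']}\nJSON: {json.dumps(ex['promo'])}")
    lines.append(f"Text: {ad_text}\nJSON:")
    return "\n\n".join(lines)

def extract_promo(ad_text: str, llm_complete) -> dict:
    # llm_complete is any text-completion callable; serving, scaling, and
    # evaluation (discussed in the talk) are out of scope for this sketch.
    return json.loads(llm_complete(build_prompt(ad_text)))
```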
Bio: Vinodh Kumar Sunkara has ~6.5 years of experience at Meta across ML/AI products including Search, Recommendations, Integrity, and Ads. He is currently the Engineering Manager for the Promo Ads team, solving sourcing and personalization problems for promo ad creation. He holds an MBA from Berkeley Haas, an MS in CS (ML/AI specialization) from Georgia Tech, and a B.Tech in CS from IIT Kharagpur.
                                                                                                                                                                                               

Accepted Papers

  • Multimodal Arabic Negotiation Bots
    Samah Albast, Wassim El-Hajj, Hazem Hajj, Khaled Shaban and Shady Elbassuoni
    Abstract: Negotiation is a fundamental aspect of human interaction. With recent advancements in chatbots, leveraging artificial intelligence for negotiation has emerged as an ideal application. Despite significant progress in English negotiation bots, such advancements are notably absent in Arabic. Furthermore, while previous research has focused on developing high-performing neural response generation systems for negotiation bots, the integration of multi-modality remains unexplored. This work presents the first Arabic multi-modal negotiation bot, a seller agent capable of engaging in negotiations with buyers in the context of item sales. The seller agent is designed to understand the buyer's Arabic utterances and to interpret the negotiation context through images provided by the buyer. To achieve this, we fine-tuned a generative pre-trained transformer (GPT-2) model on an Arabic dataset, integrating it with reinforcement learning for more coherent and persuasive responses and with a convolutional neural network to support multimodality. To evaluate our model, we relied on both automatic evaluation using established metrics such as cross-entropy loss and the BLEU score, and human evaluation of fluency, consistency, and persuasion. Our evaluation results reveal both the successes and limitations of the designed multi-modal Arabic negotiation bot, offering insights into the inherent challenges and setting directions for future research.
    PDF Code
  • Decoding Style: Efficient Fine-Tuning of LLMs for Image-Guided Outfit Recommendation with Preference Feedback
    Najmeh Forouzandehmehr, Nima Farrokhsiar, Ramin Giahi, Evren Korpeoglu and Kannan Achan
    Abstract: Personalized outfit recommendation remains a complex challenge, demanding both fashion compatibility understanding and trend awareness. This paper presents a novel framework that harnesses the expressive power of large language models (LLMs) for this task, mitigating their "black box" and static nature through fine-tuning and direct feedback integration. We bridge the visual-textual gap in item descriptions by employing image captioning with a Multimodal Large Language Model (MLLM). This enables the LLM to extract style and color characteristics from human-curated fashion images, forming the basis for personalized recommendations. The LLM is efficiently fine-tuned on the open-source Polyvore dataset of curated fashion images, optimizing its ability to recommend stylish outfits. A direct preference mechanism using negative examples is employed to enhance the LLM’s decision-making process. This creates a self-enhancing AI feedback loop that continuously refines recommendations in line with seasonal fashion trends. Our framework is evaluated on the Polyvore dataset, demonstrating its effectiveness in two key tasks: fill-in-the-blank and complementary item retrieval. These evaluations underline the framework’s ability to generate stylish, trend-aligned outfit suggestions that continuously improve through direct feedback. The evaluation results demonstrate that our proposed framework significantly outperforms the base LLM, creating more cohesive outfits. The improved performance in these tasks underscores the proposed framework’s potential to enhance the shopping experience with accurate suggestions, proving its effectiveness over vanilla LLM-based outfit generation.
    PDF Code
  • Cross-Modal Zero-Shot Product Attribute Value Generation
    Jiaying Gong, Ming Cheng, Hongda Shen, Pierre-Yves Vandenbussche and Hoda Eldardiry
    Abstract: Existing zero-shot product attribute value (aspect) extraction aims at using open-mining, graph, or large language models to predict unseen product attribute values. These approaches rely on uni-modal or multi-modal models, where the sellers should provide detailed textual inputs (product descriptions) for the products. However, manually providing (typing) the product descriptions is time-consuming and frustrating for the users. Thus, we propose a cross-modal zero-shot attribute value generation framework (ViOC-AG) based on CLIP, which only requires product images as the inputs. In other words, users only need to take photos of the products they want to sell to generate unseen attribute values. ViOC-AG follows a text-only training process, where a task-customized text decoder with a projection layer is trained with the frozen CLIP text encoder to alleviate the modality gap and task disconnection. During the zero-shot inference, product aspects are generated by the frozen CLIP image encoder connected with the trained task-customized text decoder. OCR tokens and outputs from a frozen prompt-based LLM correct the decoded outputs for out-of-domain attribute values. Extensive experiments with ablation studies conducted on the public dataset MAVE demonstrate that our proposed model significantly outperforms other fine-tuned vision-language models for zero-shot attribute value generation.
    PDF Code
  • Learning variant product relationship and variation attributes from e-commerce website structures
    Pedro Herrero-Vidal, You-Lin Chen, Cris Liu, Prithviraj Sen and Lichao Wang
    Abstract: We introduce VARM, a variant relationship matcher model, to identify pairs of variant products in e-commerce catalogs. Traditional definitions of entity resolution are concerned with whether two mentions refer to the same underlying product. However, this fails to capture product relationships that are critical for e-commerce applications, such as listing similar, but not identical, products on the same webpage or review sharing. Here, we formulate a new type of entity resolution for variant product relationships. In contrast with the traditional definition, the new definition requires both identifying whether two products are variant matches of each other and which attributes vary between them. To overcome these challenges, we developed a model that leverages the strengths of both encoding and generative AI models. First, we construct a dataset that captures webpage product links, and therefore variant product relationships, to train an encoding LLM to predict variant matches for any given pair of products. Second, we use RAG-prompted generative LLMs to extract variant and common attributes among groups of variant products. To validate our strategy, we evaluated model performance using real data from one of the world's leading e-commerce retailers. The results show that our model outperforms alternative solutions and paves the way to exploiting this new type of product relationship.
    PDF Code
  • Towards More Relevant Product Search Ranking via Large Language Models: An Empirical Study
    Qi Liu, Atul Singh, Jingbo Liu, Cun Mu and Zheng Yan
    Abstract: Training Learning-to-Rank models for e-commerce product search ranking can be challenging due to the lack of a gold standard of ranking relevance. In this paper, we decompose ranking relevance into content-based and engagement-based aspects, and we propose to leverage Large Language Models (LLMs) for both label and feature generation in model training, primarily aiming to improve the models' predictive capability for content-based relevance. Additionally, we introduce different sigmoid transformations on the LLM outputs to polarize relevance scores, enhancing the model's ability to balance content-based and engagement-based relevance and thus prioritize highly relevant items overall. Comprehensive online tests and offline evaluations are also conducted for the proposed design. Our work sheds light on advanced strategies for integrating Language Models into e-commerce product search ranking model training, offering a pathway to more effective and balanced models with improved ranking relevance.
    PDF Code
  • Hierarchical Knowledge Graph Construction from Images for Scalable E-Commerce
    Zhantao Yang, Han Zhang, Fangyi Chen, Anudeepsekhar Bolimera and Marios Savvides
    Abstract: Knowledge Graph (KG) is playing an increasingly important role in various AI systems. For e-commerce, an efficient and low-cost automated knowledge graph construction method is the foundation of enabling various successful downstream applications. In this paper, we propose a novel method for constructing structured product knowledge graphs from raw product images. The method cooperatively leverages recent advances in the vision-language model (VLM) and large language model (LLM), fully automating the process and allowing timely graph updates. We also present a human-annotated e-commerce product dataset for benchmarking product property extraction in knowledge graph construction. Our method outperforms our baseline in all metrics and evaluated properties, demonstrating its effectiveness and bright usage potential.
    PDF Code
  • PAE: LLM-based Product Attribute Extraction for E-Commerce Fashion Trends
    Apurva Sinha and Ekta Gujral
    Abstract: Product attribute extraction is a growing field in the e-commerce business, with several applications including product ranking, product recommendation, future assortment planning, and improving online shopping customer experiences. Understanding customer needs is a critical part of online business, specifically for fashion products. Retailers use assortment planning to determine the mix of products to offer in each store and channel, stay responsive to market dynamics, and manage inventory and catalogs. The goal is to offer the right styles, in the right sizes and colors, through the right channels, to foster customer loyalty. In this paper we present PAE, a product attribute extraction algorithm for future trend reports consisting of text and images in PDF format. Most existing methods focus on attribute extraction from titles or product descriptions or utilize visual information from existing product images. Compared to prior work, our work focuses on attribute extraction from PDF files in which upcoming fashion trends are explained. Our contributions are three-fold: (a) we develop PAE, an efficient framework to extract attributes from unstructured data (text and images); (b) we provide a catalog matching methodology based on BERT representations to discover existing attributes using upcoming attribute values; (c) we conduct extensive experiments with several baselines and show that PAE is an effective, flexible, and on-par-or-superior (avg. 92.5% F1-score) framework compared to the existing state-of-the-art for the attribute value extraction task.
    PDF Code
  • LLM-Modulo-Rec: Leveraging Approximate World-Knowledge of LLMs to Improve eCommerce Search Ranking Under Data Paucity
    Ali El Sayed, Sathappan Muthiah and Nikhil Muralidhar
    Abstract: Effective ranking of products relevant to a user's query and interest is the main goal of e-commerce product ranking. In this context, ranking irrelevant products or those mismatched with the intent of the user query results in sub-optimal user experience. Providing high-quality, relevant search rankings requires large labelled datasets for training powerful deep learning (DL) based ranking pipelines. However, such large datasets are costly and time-consuming to obtain. Another important facet that influences search ranking quality is the intent and ambiguity in the user's search query. Hence, data paucity and query ambiguity are two ever-present challenges impeding the success of modern deep learning (DL) based e-commerce ranking models. In this work, we present the first ever investigation of employing large-language models (LLMs) as approximate knowledge sources to counter these challenges and improve the performance of off-the-shelf ranking models, under data paucity and query ambiguity. Specifically, we undertake the first ever investigation of developing an LLM-Modulo method to improve the search ranking performance of off-the-shelf ranking models. Our experiments demonstrate notable performance improvements in ranking quality of these off-the-shelf models, when employed in an LLM-Modulo manner.
    PDF Code
  • ReScorer: An Aggregation and Alignment Technique for Building Trust into LLM Reasons
    Brian de Silva, Jay Mohta, Sugumar Murugesan, Dantong Liu, Yan Xu and Mingwei Shen
    Abstract: Large language models (LLMs) offer substantial potential for automating labeling tasks, showcasing robust zero-shot performance across diverse classification tasks. The LLM-generated reasons that accompany these classifications contain signals about the quality of the classifications. Estimates of quality of these reasons can, in essence, be used to detect potentially incorrect predictions. Conventional metrics for scoring reasons such as ROUGE-L and BLEU scores depend on ground truth reference reasons, which are challenging and expensive to acquire, and are not available at inference time for new examples. In this paper, we use a product classification dataset to evaluate two reasoning scoring strategies that do not rely on reference reasons, one involving an LLM-based scorer and another using recently proposed ROSCOE metrics. Our analysis reveals that LLM-based approaches are computationally intensive, while aligning ROSCOE metrics with human judgment presents challenges. Consequently, we propose an extension to the ROSCOE framework called ReScorer, which achieves 7% better alignment with human judgment compared to LLM-based evaluation and 59% better than ROSCOE, while being 89% cheaper compared to LLM-based scoring.
    PDF Code
  • Label with Confidence: Effective Confidence Calibration and Ensembles in LLM-Powered Classification
    Karen Hovsepian, Dantong Liu and Sugumar Murugesan
    Abstract: Large Language Models (LLMs) have been employed as crowdsourced annotators to alleviate the burden of human labeling. However, the broader adoption of LLM-based automated labeling systems encounters two main challenges: 1) LLMs are prone to producing unexpected and unreliable predictions, and 2) no single LLM excels at all labeling tasks. To address these challenges, we first develop fast and effective logit-based confidence score calibration pipelines, aiming to leverage calibrated LLM confidence scores to accurately estimate the LLM’s level of confidence. We propose a novel calibration-error-based sampling strategy to efficiently select labeled data for calibration, leading to a reduction of calibration error by 46%, compared with uncalibrated scores. Leveraging calibrated confidence scores, we then design a cost-aware cascading LLM ensemble policy which achieves improved accuracy while reducing inference cost by more than 2x compared with the conventional weighted majority voting ensemble policy.
    PDF Code
  • E-Commerce Product Categorization with LLM-based Dual-Expert Classification Paradigm
    Zhu Cheng, Wen Zhang, Chih-Chi Chou, You-Yi Jau, Archita Pathak, Peng Gao and Umit Batur
    Abstract: Accurate product categorization in e-commerce is critical for delivering a satisfactory online shopping experience to customers. With the vast number of products available and the numerous potential categories, it becomes crucial to develop a classification system capable of assigning products to their correct categories with high accuracy. We present a novel dual-expert classification system that utilizes the power of large language models (LLMs). This framework integrates domain-specific knowledge and pre-trained LLM’s general knowledge through effective model fine-tuning and prompting techniques. First, the fine-tuned domain-specific expert recommends top K candidate categories for a given input product. Then, the more general LLM-based expert, through prompting techniques, analyzes the nuanced differences between candidate categories and selects the most suitable target category. Experiments on e-commerce datasets demonstrate the effectiveness of our LLM-based Dual-Expert classification system.
    PDF Code
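One recurring idea above, the sigmoid transformation used in "Towards More Relevant Product Search Ranking via Large Language Models: An Empirical Study" to polarize LLM relevance scores, can be illustrated with a minimal sketch; the steepness and midpoint values below are assumptions for illustration, not the authors' exact transform.

```python
# Illustrative sketch (assumed parameters): a steep sigmoid that polarizes an
# LLM relevance score in [0, 1] toward 0 or 1 before it is blended with
# engagement-based signals in a ranking model.
import math

def polarize(score: float, steepness: float = 10.0, midpoint: float = 0.5) -> float:
    """Push ambiguous mid-range scores toward the extremes."""
    return 1.0 / (1.0 + math.exp(-steepness * (score - midpoint)))

print(round(polarize(0.55), 2))  # 0.62: a borderline score moves away from the midpoint
print(round(polarize(0.90), 2))  # 0.98: a clearly relevant score saturates near 1
```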

Organizers

Mansi Mane

Walmart Global Tech

Bio: Mansi Mane is a Staff Data Scientist on the Search and Recommendation team at Walmart Labs. She completed her Master's at Carnegie Mellon University in 2018. She currently focuses on research and development of machine learning algorithms for recommendations, search, marketing, and content generation. Mansi was previously an Applied Scientist at AWS, where she led efforts to train billion-scale large language models from scratch. Her research interests include machine learning and multimodal LLM pretraining, fine-tuning, and in-context learning.
Djordje Gligorijevic

eBay

Bio: Djordje Gligorijevic is an Applied Science Manager at eBay, leading the allocation and pricing team in eBay’s sponsored search program. Prior to eBay, Djordje worked as a Research Scientist at Yahoo Research. He received his Ph.D. from Temple University, Philadelphia, PA, in 2018. His research interests include Machine Learning, Extreme Multi-Label Classification, NLP, LLMs, and the integration of qualitative knowledge into predictive models, with applications in Computational Healthcare, Computational Advertising, Search, Ranking, and Recommendation Systems. Djordje has published at international conferences such as AAAI, KDD, TheWebConf, SDM, CIKM, and SIGIR, as well as in international journals such as Data Mining and Knowledge Discovery, the Big Data journal (where he serves as associate editor), Methods, and Nature’s Scientific Reports.
Dingxian Wang

Upwork

Bio: Dingxian Wang is an Applied Science leader with around 10 years of industry experience at the intersection of machine learning, software engineering, applied science, and product development. He is passionate about applying these skills to real-world problems, especially in technology and data science. He currently leads a team focused on ranking, personalization, and recommendation in search. Throughout his career, Dingxian has worked across a wide range of areas, including search engines, query understanding, recall systems, ranking systems, recommender systems, marketing science, personalization, information extraction, and knowledge graphs, with a proven track record of delivering business results and driving hundreds of millions of dollars in GMV and revenue growth. His honors and awards include 20+ papers at top conferences and journals (including a best paper candidate at CIKM 2021), 9 US patents, over 1,500 citations, the ICT Research Project of the Year 2021 from the ACS (Australian Computer Society), and an eBay Leaders’ Choice Award.
Behzad Shahrasbi

Amazon

Bio: Behzad Shahrasbi is a Manager of Applied Science at Amazon. His team leads efforts to proactively detect and prevent catalog abuse and protect brands at scale. His career also includes significant tenures at Walmart, SmartDrive Systems, and Nokia. Behzad's research interests span NLP, Multimodal Representations, Recommender Systems, and Privacy-Preserving ML. Since completing his PhD, Behzad has published in several journals and conferences, including ICML, IEEE Transactions on Big Data, ICCV, and ICASSP.
Topojoy Biswas

Walmart Global Tech

Bio: Topojoy Biswas is a Distinguished Data Scientist at Walmart Labs. At Walmart he leads efforts related to W+ membership models and creative generation projects. Prior to Walmart he worked as a Principal Engineer at Yahoo Research on information extraction from text and videos for the Yahoo Knowledge Graph, which powers search and information organization across Yahoo products such as Finance, Sports, and entity search and browse. Before the Yahoo Knowledge Graph, he worked on Yahoo Shopping, on attribute extraction and classification of shopping feeds into large product taxonomies. Topojoy has published at multiple international conferences such as ICIP and ACM Multimedia, and has spoken on applied machine learning topics at venues such as MLconf and KGC.
Evren Korpeoglu

Walmart Global Tech

Bio: Evren Korpeoglu is a Director of Data Science on the Personalization and Recommendations team at Walmart Global Tech. At Walmart he leads efforts related to whole-page optimization, item recommendations, and the use of Generative AI based models for recommendations. He completed his Ph.D. at Bilkent University. He has published at international conferences such as NeurIPS, ICML, and SIGKDD.
Marios Savvides

CMU, UltronAI

Bio: Professor Marios Savvides is the Bossa Nova Robotics Professor of Artificial Intelligence at Carnegie Mellon University, the Founder and Director of the Biometrics Center at Carnegie Mellon University, and a Full Tenured Professor in the Electrical and Computer Engineering Department. He received his Bachelor of Engineering in Microelectronics Systems Engineering from the University of Manchester Institute of Science and Technology in 1997 in the United Kingdom, his Master of Science in Robotics from the Robotics Institute in 2000, and his PhD from the Electrical and Computer Engineering Department at CMU in 2004. He has authored and co-authored over 250 journal and conference publications, including 22 book chapters, and served as the area editor of Springer’s Encyclopedia of Biometrics. Some of his notable accomplishments include developing a 40ft stand-off distance iris recognition system, robust face detection even in the presence of extreme occlusions, and a fully autonomous AI inventory robotic image analytics system for detecting out-of-stocks that he and his team scaled to 550 Walmart retail stores. His latest research is in large foundation vision models for zero-shot enrollment for robust object recognition, which has spun out as the enterprise software company UltronAI, Inc. His work was presented at the World Economic Forum (WEF) in Davos, Switzerland in January 2018, and his research has been featured in over 100 news media articles (CNN, CBS 60 Minutes, Scientific American, Popular Mechanics, etc.). He is the recipient of CMU’s 2009 Carnegie Institute of Technology (CIT) Outstanding Research Award, the Gold Award in the 2015 Edison Awards in Applied Technologies for his biometrics work, the 2018 Global Pittsburgh Immigrant Entrepreneur Award in Technological Innovation, the 2020 Artificial Intelligence Excellence Award in “Theory of Mind”, the Gold Award in the 2020 Edison Awards for Retail Innovations on Autonomous Data Capture and Analysis of On-Shelf Inventory, the Stevens J. Fenves Award for Systems Research, the 2020 Outstanding Contributor to AI award from the US Secretary of the Army Ryan McCarthy, and being named “Inventor of the Year” by the Pittsburgh Intellectual Property Law Association (PIPLA) in 2022.
