ABOUT ME
NADAV ZAMIR
Head of Algorithms R&D at Alibaba Machine Intelligence Lab
AI-tech executive with a passion for developing data-driven solutions to hard problems that create business impact
Hello, my name is Nadav Zamir, and I’m an Electrical Engineer from Raanana, Israel. I hold a BSc in Electrical Engineering from the Technion – Israel Institute of Technology and have taken selected courses from the MSc Electrical Engineering program at Tel Aviv University, focusing on Computer Vision, Machine Learning and Deep Learning.
I currently work at the Alibaba Machine Intelligence Lab, DAMO Academy, as Head of Algorithms R&D. My experience includes setting strategy, and researching and delivering state-of-the-art machine learning and deep learning technologies that answer business needs and are cost-effective at scale.
My past roles include co-founding galigu, a virtual-reality startup where I served as Chief Technology Officer, and leading a team at Intel's Innovation Lab, path-finding and prototyping future technologies.
- Birthdate: 30 August
- Email: Nadav.Zamir1 (at) gmail.com
- Twitter: https://twitter.com/NadavZamir1
- LinkedIn: https://linkedin.com/in/NadavZamir
MY RESUME
- Experience
Alibaba Israel Algorithms Group Lead
Alibaba Group
Leading Alibaba Israel's algorithms group (multiple teams, 17 members), developing AI-driven technologies for Alibaba Cloud Drive (阿里云盘). Among other technologies, we work on image and video understanding and running AI cost-efficiently at…
Apr 2020 – Aug 2021
AutoML Algorithms Team Lead
Alibaba Group
Leading the development of an AutoML product for vision problems that automates the AI development process from data to AI-in-production. Given a customer dataset, our platform automatically preprocesses the data, searches for a network architecture (NAS)…
Feb 2019 – Apr 2020
Deep Learning Applied Researcher
Alibaba Group
Delivering innovative solutions to Alibaba Group's different business units, leveraging state-of-the-art Machine Learning and Deep Learning technologies
Mar. 2018 – Feb. 2019
Co-Founder & CTO
galigu LTD
galigu is a tech startup developing a search platform for the virtual-reality environment. With galigu, users can find desired VR content through a natural interface (voice or keyboard) with advanced search capabilities.
Dec. 2016 – Jan. 2018
Sr. Algorithm and Software Engineer
Perceptual Computing Dept., Intel Corp.
Led the Device Innovations and 3D-Enhanced VR domains, exploring future directions by demonstrating compelling usages, their viability and business traction. Filed several patents and drove multiple products that were presented by Intel's top-level management.
Sept. 2015 – Dec. 2016
Algorithm and Software Engineer
Perceptual Computing Dept., Intel Corp.
Part of the Path-finding & Prototyping team in the Advanced Technologies Group. Created innovative user experiences using computer vision, computer graphics and human-computer interaction.
Feb. 2013 – Sept. 2015
Hardware Verification Engineer
Intel Corp.
Established and maintained the verification environment; experienced in test-engine development and reference models. Completed extensive training seminars in UNIX, Perl, SystemVerilog and Specman.
Oct. 2009 – Feb. 2013
- Education
MSc Electrical Engineering (Selected Courses)
Tel Aviv University
Selected courses from the MSc Electrical Engineering program, with emphasis on Computer Vision, Machine Learning and Deep Learning
Oct. 2013 – Feb. 2016
BSc in Electrical Engineering
Technion – Israel Institute of Technology
Majored in Computers, Computer Security and Digital Signal Processing
Oct. 2007 – Aug. 2010
MY PATENTS
Wellness Mirror
Issued Oct 19, 2017 • US15280466
Various systems and methods for providing a wellness mirror are provided herein. A system for providing a wellness mirror includes a display; a modeler to receive depth images from a depth camera that is communicatively coupled to the system, and provide a model of a subject in the depth images; a health profiler to analyze the model and produce a health and wellness analysis; and a user interface to present the health and wellness analysis on the display.
Interactive Adaptive Narrative Presentation
Issued Mar 3, 2017 • US20170092322
A narrative presentation system may include at least one optical sensor capable of detecting objects added to the field-of-view of the at least one optical sensor. Using data contained in signals received from the at least one optical sensor, an adaptive narrative presentation circuit identifies an object added to the field-of-view and identifies an aspect of a narrative presentation logically associated with the identified object. The adaptive narrative presentation circuit modifies the aspect of the narrative presentation identified as logically associated with the identified object.
Method And System Of 3D Image Capture With Dynamic Cameras
Issued Sep 13, 2016 • US14866523/20170094259
A number of different applications use first-person or point-of-view (POV) image capturing to enable a user or viewer to modify the perspective of an object being viewed. Examples include first-person video, such as capturing a sports or entertainment event where it is possible to rotate or change the camera view and interpolate or otherwise generate a video from a new viewpoint that was not originally captured. When the objects being recorded are static, a moving camera may be used and moved around the object to capture images from multiple perspectives. When the objects to be recorded are moving, however, either multiple cameras must be mounted to the object, which is not always possible, or an array of cameras must surround the entire area where the objects may move, thereby limiting the motion of the object, which is often undesirable. Such an array is typically static, costly, and labor-intensive to set up and synchronize.
Media content including a perceptual property and/or a contextual property
Issued Aug 21, 2013 • US2015/0058764A1
Apparatuses, systems, media and/or methods may involve creating content. A property component may be added to a media object to impart one or more of a perceptual property or a contextual property to the media object. The property component may be added responsive to an operation by a user that is independent of a direct access by the user to computer source code. An event corresponding to the property component may be mapped with an action for the media object. The event may be mapped with the action responsive to an operation by a user that is independent of a direct access by the user to computer source code. A graphical user interface may be rendered to create the content. In addition, the media object may be modified based on the action in response to the event when content created including the media object is utilized.