
Er. MAHESH BOGATI


Electronics, Communication and Information Technology Engineer (IOE, 2019-2024)

Reinforcement Learning Research Aspirant (PhD/Master's)

I am dedicating the decade (2025-2036) to pursuing Reinforcement Learning (RL) research and making contributions, driven by a vision that machines can learn like humans. After that decade, I will try to carry those findings and principles onto quantum devices to build toward a sense of superintelligence.

Research

SOCKETING REINFORCEMENT LEARNING INTO A UNITY GAME ENVIRONMENT: THE PUSH-BOX GAME

DOI: N/A | Published: N/A

The idea is simple: the project has an environment developed in Unity for the agent (a box). Unity and the Reinforcement Learning (RL) logic are glued together over a WebSocket/TCP connection to Python.
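A minimal sketch of the Python side of that bridge, assuming a plain TCP connection that exchanges one JSON message per line; the port, message format, and the placeholder policy below are illustrative assumptions, not the project's actual protocol:

```python
# Minimal sketch: Python TCP server that receives observations from the
# Unity push-box environment and replies with an action per step.
# The port, JSON schema, and random policy are assumptions for illustration.
import json
import random
import socket

HOST, PORT = "127.0.0.1", 9000  # hypothetical port the Unity client connects to

def choose_action(observation):
    """Placeholder policy: a trained RL agent would map the box's
    observation (e.g. box and target positions) to a move."""
    return random.choice(["up", "down", "left", "right"])

def serve():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.bind((HOST, PORT))
        server.listen(1)
        conn, _ = server.accept()            # Unity connects as a client
        with conn, conn.makefile("rw") as stream:
            for line in stream:              # one JSON observation per line
                obs = json.loads(line)
                action = choose_action(obs)
                stream.write(json.dumps({"action": action}) + "\n")
                stream.flush()               # send the action back to Unity

if __name__ == "__main__":
    serve()
```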

Solo Research By: Er. Mahesh Bogati

web-socket/TCP

Unity

Reinforcement Learning

HOW COULD WE TELL IF A MACHINE HAS LEARNED?

DOI: N/A | Published: N/A

We can do 2 + 2 = 4 easily, but our brain actually learned it by itself, without teaching us the immensely complex neurological processes (perhaps millions of them) involved in perceiving it; it simply leaves us with the sense that we learned it. A human learns something, senses it naturally, applies it, tests it, stores it, and, remarkably, sometimes forgets it and sometimes remembers it again. Our brain blends observed and unobserved things incredibly well, integrating observed information (e.g., visual, audio, symbolic knowledge) with unobserved information (e.g., intent, common sense, feelings) to make learning happen in us. Now, how could we tell if a machine has learned something?

Solo Research By: Er. Mahesh Bogati

Reinforcement Learning

VOICE CONTROLLED ROBOT WITH COMPUTER VISION AND ROBOTIC ARM

DOI: 10.5281/zenodo.15811734 | Published: Fri Apr 19 2024

The proposed project aims to create a sophisticated voice-controlled robot with computer vision and a robotic arm, using the latest technologies in voice recognition, computer vision, and robotics. It uses computer vision to detect an object, an ultrasonic sensor to measure the distance, and a robotic arm to pick up that object. The main goal is to build a versatile and user-friendly robot that can understand spoken commands through speech-to-text conversion, move around independently using computer vision, and perform complex tasks accurately with its robotic arm. This combination of smart features aims to provide a smooth and flexible robot that can be used in different areas depending on user requirements.
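The sketch below shows how that command pipeline could be wired together in Python. The speech-to-text step uses the off-the-shelf speech_recognition library, while detect_object, read_ultrasonic_cm, and move_arm_to are hypothetical placeholders standing in for the project's actual vision model and hardware drivers:

```python
# Rough sketch of the voice -> vision -> ultrasonic -> arm pipeline.
# Hardware-facing functions are hypothetical placeholders, not the
# project's real drivers.
import speech_recognition as sr  # off-the-shelf speech-to-text library

def listen_for_command(recognizer, microphone):
    """Convert one spoken command to lowercase text via speech-to-text."""
    with microphone as source:
        audio = recognizer.listen(source)
    return recognizer.recognize_google(audio).lower()

def detect_object(name):
    """Placeholder: run the vision model, return (found, bearing_deg)."""
    raise NotImplementedError

def read_ultrasonic_cm():
    """Placeholder: return distance to the object from the ultrasonic sensor."""
    raise NotImplementedError

def move_arm_to(distance_cm):
    """Placeholder: solve the arm kinematics and drive it to grasp."""
    raise NotImplementedError

def handle_pick_command(target):
    found, bearing = detect_object(target)   # computer vision locates the object
    if not found:
        return False
    distance = read_ultrasonic_cm()          # ultrasonic sensor measures range
    move_arm_to(distance)                    # robotic arm picks the object up
    return True

if __name__ == "__main__":
    recognizer, microphone = sr.Recognizer(), sr.Microphone()
    command = listen_for_command(recognizer, microphone)
    if command.startswith("pick up "):
        handle_pick_command(command.removeprefix("pick up "))
```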

Project Research By: Er. Mahesh Bogati, Er. Sangram Bhandari, Er. Sarthak Chaudary, Ankita Rai Khaling

Electronics

Communication

Robotics

Inverse Kinematics

Reinforcement Learning

Deep Learning

SERVO DRIVEN FINGER GESTURE CONTROL BASED ON COMPUTER VISION

DOI: N/A | Published: Fri Jul 28 2023

Based on computer vision and Arduino board control, the program runs automatically after execution: it first decides whether a palm is detected; if so, the actor can proceed to show finger gestures, and the physical palm mimics the actor's palm; otherwise, it terminates the model and reopens the webcam screen. The loop runs until we quit it manually. The aim of this project is to show the many wonderful application possibilities of computer vision combined with robotics. The project uses computer vision techniques and some essential Python libraries to process the image of the palm and mimic the movement of the fingers within the degrees of freedom provided to the appendage.
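A minimal sketch of the vision-side loop, assuming MediaPipe Hands for palm and finger landmarks and pyserial for the Arduino link; the serial port, the message format, and the simple tip-above-joint rule are illustrative assumptions (the thumb, which folds sideways, is skipped here):

```python
# Minimal sketch: detect the palm, read which fingers are raised, and send
# the states to the Arduino driving the servos. Port and message format
# are assumptions; landmark indices follow MediaPipe's hand model.
import cv2
import mediapipe as mp
import serial

TIPS = [8, 12, 16, 20]          # index, middle, ring, pinky fingertip landmarks
PIPS = [6, 10, 14, 18]          # corresponding PIP joints

arduino = serial.Serial("/dev/ttyUSB0", 9600)      # hypothetical serial port
hands = mp.solutions.hands.Hands(max_num_hands=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:                 # palm detected
        lm = result.multi_hand_landmarks[0].landmark
        # A finger counts as raised when its tip sits above its PIP joint.
        states = ["1" if lm[t].y < lm[p].y else "0" for t, p in zip(TIPS, PIPS)]
        arduino.write(("".join(states) + "\n").encode())  # e.g. "1010\n"
    cv2.imshow("palm", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):           # loop runs until quit manually
        break

cap.release()
cv2.destroyAllWindows()
```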

Project Research By: Er. Mahesh Bogati, Er. Sangram Bhandari, Er. Sarthak Chaudary

Electronics

Communication

Robotics

Computer Vision

Signal Analysis

Engineering Since 2018

Life will not always be cool, so you have to be cool.