Created Another Robot (Omnidirectional Wheels, Movable Body, Dual Arms)

DETAILS ON MY NEWEST ROBOT

Hey everyone, my latest robot (in the image below) is up and running, and it has some very interesting features. I’ve got the typical tablet as the head, but this is the first of my robots to use omnidirectional wheels. These are just fantastic: they allow movement in any direction without needing to turn, and they allow full 360° rotation in place. I’ve also stopped trying to get bipedal walking robots working. As cool as it is to have a robot that mimics humans, walking is brittle and unbalanced at best, and that trade-off just isn’t worth it compared to rugged omnidirectional wheels. The new design can also carry a lot of weight.
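To make the "any direction without turning" point concrete, here's a minimal kinematics sketch. The post doesn't say which wheel layout the base uses, so this assumes a four-wheel mecanum-style arrangement; the dimensions are made-up placeholders, and the wheel signs depend on roller orientation.

```python
def wheel_speeds(vx, vy, wz, lx=0.20, ly=0.15, r=0.05):
    """Inverse kinematics for an assumed 4-wheel mecanum base.

    vx: forward velocity (m/s), vy: leftward velocity (m/s),
    wz: counter-clockwise rotation (rad/s).
    lx, ly: half wheelbase / half track width (m); r: wheel radius (m).
    Returns angular speeds (rad/s) for the FL, FR, RL, RR wheels.
    """
    k = lx + ly
    fl = (vx - vy - k * wz) / r
    fr = (vx + vy + k * wz) / r
    rl = (vx + vy - k * wz) / r
    rr = (vx - vy + k * wz) / r
    return fl, fr, rl, rr

# Strafe sideways without turning: only vy is nonzero.
print(wheel_speeds(0.0, 0.3, 0.0))
# Spin 360° in place: only wz is nonzero.
print(wheel_speeds(0.0, 0.0, 1.0))
```

Any combination of forward, sideways, and rotational velocity maps to a valid set of wheel speeds, which is exactly why the base never has to turn before moving.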

Another interesting feature is the arms. There are two arms, one on each side, and the chest section can move up or down along a track. It can either take both arms with it, or it can stay at any point on the track while the arms move independently of each other and of the chest. This allows configurations such as the chest in the middle of the track, the left arm at the top, and the right arm at the bottom. The robot is carrying, and powering, a mini gaming desktop with an NVIDIA RTX 2060 in it, which is certainly powerful enough to run neural networks and a variety of other programs. The computer is held in a backpack worn on the robot’s back.
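Here's a toy model of those degrees of freedom, just to make the coupled-versus-independent motion concrete. The class name, track length, and positions are all hypothetical; the real robot's control interface isn't described in the post.

```python
class TorsoTrack:
    """Toy model: chest and arm carriages on a shared vertical track.

    Positions are in meters from the bottom of the track
    (all dimensions here are invented for illustration).
    """

    def __init__(self, length=0.6):
        self.length = length
        self.chest = self.left_arm = self.right_arm = length / 2

    def _clamp(self, pos):
        return max(0.0, min(self.length, pos))

    def move_chest(self, delta, carry_arms=True):
        # The chest can optionally drag both arms along with it.
        self.chest = self._clamp(self.chest + delta)
        if carry_arms:
            self.left_arm = self._clamp(self.left_arm + delta)
            self.right_arm = self._clamp(self.right_arm + delta)

    def move_arm(self, side, delta):
        # Each arm can also move independently of the chest.
        pos = self._clamp(getattr(self, side + "_arm") + delta)
        setattr(self, side + "_arm", pos)

# The configuration described above: chest mid-track,
# left arm at the top, right arm at the bottom.
t = TorsoTrack()
t.move_arm("left", +0.3)
t.move_arm("right", -0.3)
```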

NEXT STEPS

I’ve updated this section recently, long after the release of this robot. The design has evolved toward simplicity, and toward compatibility with my system for training machine learning models in virtual reality and then uploading what was learned into physical robots operating in physical space. In short, robots learn in virtual reality and then apply those learned actions in the physical world. The next steps are to continue evolving the new model (seen below) and to keep advancing its integration with machine learning in virtual reality environments that match the physical world.

I think of it this way: when the robot needs to compute its actions, it uses an “AI Imagination” (another feature I’ve recently created). This feature lets the robot’s potential actions and outcomes be processed with machine learning in multiple parallel instances, each a copy of the robot’s physical environment. In each of these copies, the robot can execute a potential action, or set of actions, and once the optimal action has been discovered, the information needed to take that action is uploaded from the virtual world into the robot’s capabilities. In summary, its thinking happens in a thought bubble that is the virtual world, just like our own thinking happens in little thought bubbles where we simulate our environment and potential actions and eventually decide which action to take.
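The post doesn't publish the AI Imagination internals, so here is a minimal sketch of the pattern it describes: fork private copies of the environment, score each candidate action in parallel, and return the winner for upload to the physical robot. The `env_snapshot.step(...)` call and every name below are assumptions for illustration, standing in for whatever simulator the virtual-world copies actually expose.

```python
from concurrent.futures import ProcessPoolExecutor
import copy

def rollout(env_snapshot, action):
    # Each candidate runs in its own private copy of the environment,
    # so rollouts can't interfere with each other (or with the real robot).
    sim = copy.deepcopy(env_snapshot)
    return sim.step(action)  # assumed to return a scalar score

def imagine(env_snapshot, candidate_actions, workers=4):
    # Evaluate every candidate in a parallel "imagined" world, then
    # return the action that scored best. The snapshot must be
    # picklable to cross process boundaries.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(rollout,
                               [env_snapshot] * len(candidate_actions),
                               candidate_actions))
    return candidate_actions[scores.index(max(scores))]
```

Only the winning action ever leaves the "thought bubble"; everything else stays in the disposable copies, which is the whole appeal of thinking in simulation first.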

My Latest Robot as of Early 2022

Thanks for reading,

Trevor E. Chandler
