I really like the XR Hands visualizer, but I'm wondering if I can just use the mesh from the hands as a trigger when they collide with an object. Is this possible? I tried it without success. I just want logic to trigger if a hand is "touching" an object at any point. I have a feeling that the mesh inside those hands is only for visualization. In that case, maybe I could use the bone/joint structure as a collider?
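One direction I've been considering, sketched below: since the visualizer mesh has no colliders, drive a small kinematic trigger collider from the joint data the XR Hands package exposes. This is only a rough sketch, not tested; it assumes the XR Hands package is installed, a hand subsystem is running, and the object sits under the XR Origin (joint poses are reported in session space). Joint choice, collider radius, and null handling would all need tuning.

    using System.Collections.Generic;
    using UnityEngine;
    using UnityEngine.XR.Hands;

    // Sketch: a trigger SphereCollider that follows the right index fingertip
    // joint, so normal OnTriggerEnter callbacks fire when the finger touches
    // something. Parent this GameObject under the XR Origin.
    public class FingertipCollider : MonoBehaviour
    {
        XRHandSubsystem m_Subsystem;

        void Start()
        {
            var col = gameObject.AddComponent<SphereCollider>();
            col.isTrigger = true;
            col.radius = 0.01f; // ~1 cm fingertip, a guess to tune

            var rb = gameObject.AddComponent<Rigidbody>();
            rb.isKinematic = true; // moved by code, not by physics

            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count > 0)
            {
                m_Subsystem = subsystems[0];
                m_Subsystem.updatedHands += OnUpdatedHands;
            }
        }

        void OnUpdatedHands(XRHandSubsystem subsystem,
                            XRHandSubsystem.UpdateSuccessFlags flags,
                            XRHandSubsystem.UpdateType updateType)
        {
            var joint = subsystem.rightHand.GetJoint(XRHandJointID.IndexTip);
            if (joint.TryGetPose(out Pose pose))
                transform.SetLocalPositionAndRotation(pose.position, pose.rotation);
        }
    }

In practice you'd probably want one such collider per joint (or at least per fingertip plus the palm), tagged "Grab" so the script below can react to it.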
This is the script I want to attach to some simple cube game objects (giving them box colliders and rigidbodies), so they become children of a hand once touched. If this works, I could then add an if-statement checking whether the hand is in a grip pose. I want to avoid using the XR Interaction Toolkit's pre-made actions because of some custom functions I have in mind.
using UnityEngine;

public class ChildOnCollision : MonoBehaviour
{
    private void OnTriggerEnter(Collider other)
    {
        // Check if the other object has the tag "Grab"
        if (other.CompareTag("Grab"))
        {
            // Make this object a child of the other object's transform
            transform.SetParent(other.transform);

            // Optionally, reset the local position and rotation
            transform.localPosition = Vector3.zero;
            transform.localRotation = Quaternion.identity;
        }
    }
}
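For the grip-pose check mentioned above, one possible approach (untested sketch, assuming XR Hands joint data is available) is to measure the distance between a fingertip joint and the palm joint; the 5 cm threshold is a guess to tune:

    using UnityEngine;
    using UnityEngine.XR.Hands;

    // Rough grip-pose test: the hand counts as "gripping" when the index
    // fingertip is close to the palm. Returns false if joint data is
    // unavailable this frame.
    public static class GripPose
    {
        public static bool IsGripping(XRHand hand, float threshold = 0.05f)
        {
            var tip = hand.GetJoint(XRHandJointID.IndexTip);
            var palm = hand.GetJoint(XRHandJointID.Palm);
            if (tip.TryGetPose(out Pose tipPose) && palm.TryGetPose(out Pose palmPose))
                return Vector3.Distance(tipPose.position, palmPose.position) < threshold;
            return false;
        }
    }

A more robust version would check several fingertips, but this should be enough to gate the SetParent call.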