The LiDAR sensor in the iPad Pro (or iPhone 12/13 Pro/Max) has a range of only about 5 meters. So, while standing back to capture the whole scene can be helpful, you get the benefit of both LiDAR and photogrammetry from video only when you are within that 5-meter range. Objects beyond that range are reconstructed from photogrammetry alone.
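To make the range cutoff concrete, here is a minimal sketch (function and variable names are illustrative, not from any real SDK) that partitions depth samples by whether they fall inside the ~5 m LiDAR range quoted above:

```python
# Sketch: split depth samples by the approximate 5 m LiDAR range.
# Names are hypothetical; real capture apps do this internally.
LIDAR_MAX_RANGE_M = 5.0

def partition_by_lidar_range(depths_m, max_range=LIDAR_MAX_RANGE_M):
    """Split depth samples (in meters) into those LiDAR can measure
    directly and those that must come from photogrammetry alone."""
    lidar = [d for d in depths_m if d <= max_range]
    photogrammetry_only = [d for d in depths_m if d > max_range]
    return lidar, photogrammetry_only

samples = [0.8, 2.5, 4.9, 6.2, 12.0]
near, far = partition_by_lidar_range(samples)
# near -> [0.8, 2.5, 4.9]; far -> [6.2, 12.0]
```

In other words, everything in `far` would have no direct depth measurement and would rely entirely on matching features across video frames.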
Scanning works best when you can move freely around an object, for example in a closed loop, with a significant amount of overlap between views.
Avoid rotations in place - It's easy to stand still and look around, but this doesn't give good 3D results. Instead, keep your feet moving, and when you need to turn in a new direction, do it in an arc (backing up first if needed).
Avoid motion blur in the video - Indoor or dimly lit environments make motion blur likely, since the camera's shutter stays open longer. Move a little more slowly and avoid shaking the phone or whipping it around.
Try to keep diverse scene elements in view - The 3D reconstruction works best when there are a variety of unique, high-contrast objects in the scene. Avoid continually filling the camera’s field-of-view with entirely white walls, looking straight at the sky, etc.
Maintain strong connectivity between near and far views - When scanning a scene, you usually get the most complete results when you keep a consistent distance from what you're scanning.
To add detail to a reconstruction, you can get closer to various parts, but make sure some unique object or texture is in view for the image tracking to latch onto. By combining near and far views, you can get the best of both worlds: the far views of the whole scene tie everything together and provide a skeleton of connectivity for the reconstruction, and the near views add detail (as long as those near views are easily distinguished and share similar content with the far views).
Variety of viewing angles - When scanning an object or scene, it's best to view it from a variety of overlapping angles and distances. You want to maintain strong connectivity between those different views, but a large variety is important for generating complete and accurate results. This is especially true for flat surfaces like the ground or a wall: scanning a flat surface only from an oblique angle will produce artifacts in the reconstruction.
Targets - When trying to get the most accurate point cloud data, targets are helpful. Print out two copies of the scaling target and place them around the object to be scanned, as far apart as possible. The larger the scale, the better! When scanning, make sure to pass by the targets and get close enough that you can make out all the small black and white squares. When a target is detected, it is highlighted in the video with a green square. Once processing completes, the point cloud is scaled exactly to the reference dimension entered prior to scanning.
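The scaling step amounts to a uniform rescale of the point cloud so the measured distance between the two targets matches the reference dimension you entered. A minimal sketch (the helper name and the numbers are hypothetical):

```python
# Sketch of target-based scaling: rescale every point so the measured
# target separation matches the entered reference dimension.
# Helper name and example values are illustrative only.
def scale_point_cloud(points, measured_target_dist, reference_dist):
    """Uniformly scale (x, y, z) points by reference/measured."""
    s = reference_dist / measured_target_dist
    return [(x * s, y * s, z * s) for (x, y, z) in points]

# Example: the targets come out 1.9 units apart in the raw cloud,
# but were physically placed 2.0 m apart.
raw = [(0.0, 0.0, 0.0), (1.9, 0.0, 0.0), (0.95, 0.5, 0.0)]
scaled = scale_point_cloud(raw, measured_target_dist=1.9, reference_dist=2.0)
# The two target points are now exactly 2.0 apart.
```

This also suggests why placing the targets far apart helps: the same detection error in the targets' positions produces a smaller relative error in the scale factor when the baseline between them is large.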
Repeating Patterns - When scanning areas with repeating patterns, like a tiled floor or a brick wall, it's very easy for the reconstruction to get confused and lose tracking. The resulting point cloud may show "ghosting effects" or have misaligned sections. Two remedies to try: move farther back, with a wider field of view, so you capture additional unique features of the scene; or place distinctive objects in the areas of repeating patterns to break up the repetition.