What PC specs are commonly needed for photogrammetry in Blender?
Drekkan
Posts: 459
I'm looking into photogrammetry as a Blender newbie and hobbyist, but I don't really know what's required technology/system-wise. Can anyone who is familiar and experienced with it please answer the questions below? Thanks :)
- What camera would a person need?
- What specs/graphics card is required on your PC?
Thanks.
Comments
Photogrammetry setups range from entry-level gear like a smartphone, to a good camera, to a camera rig with polarised filters and a turntable (for scanning small objects), to a dedicated 3D scanner device like an EinScan, all the way up to a $100,000 multi-camera array of the kind used for creating movie-quality digital doubles.
For computer specs, I can't imagine it takes much to run photogrammetry software. Scanning software like EinScan's, Revopoint's, and the phone-based apps seems to run on just about anything. Most hobbyist photogrammetry people are using something like Meshroom, so maybe google Meshroom's required specs. I doubt it's very intensive to use; most people with a machine built for rendering can probably run it.
As stated, Meshroom gets you up and running fastest on all types of antiquated hardware.
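If you want to see roughly what a headless Meshroom run looks like, here's a minimal sketch in Python. It assumes the meshroom_batch executable that ships with Meshroom is on your PATH and that it takes an --input folder of photos and an --output folder; check both against your own install, and the paths are just placeholders.

```python
# Minimal sketch: drive Meshroom's headless pipeline from Python.
# Assumptions: "meshroom_batch" is on PATH and accepts --input/--output
# (verify against your Meshroom version); the paths are placeholders.
import subprocess
from pathlib import Path

photos = Path("~/scans/oak_bark/photos").expanduser()  # folder of overlapping photos
out = Path("~/scans/oak_bark/mesh").expanduser()       # where the results get copied
out.mkdir(parents=True, exist_ok=True)

subprocess.run(
    ["meshroom_batch", "--input", str(photos), "--output", str(out)],
    check=True,  # raise if the reconstruction fails
)
print("Results written to", out)
```

The output should include a textured .obj you can pull into Blender with File > Import > Wavefront (.obj), or from a script with bpy.ops.wm.obj_import on Blender 4.x (older builds used bpy.ops.import_scene.obj).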
I looked at lidar devices, more for perimeter security in my yard (I know! What are those people doing skulking about in my yard!), but lidar sensors are commonly found on higher-end smartphones, like some Samsung Galaxy and Apple iPhone models, where the depth sensing is used for biometric face unlock, and they're also used in high-end automated vehicle collision-avoidance systems. The ones I was looking at were made by Intel and were going for about $300 and up on Amazon. Anyway, the other, dual-use reason I wanted one was fast lidar scanning to make models of various things like the leaves and bark of different tree species, insects, and so on.
If you buy a standalone lidar device instead of a high-end Samsung Galaxy or Apple iPhone, be sure to investigate existing software and platform support for the device first; open-source support will be sparse or non-existent. If you are a programmer and want to write bespoke support, that is definitely a point in favour of an Intel lidar, since it will be easier to write against.
Introducing the Intel® RealSense™ D400 Product Family – Intel® RealSense™ Depth and Tracking Cameras (intelrealsense.com)
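For what it's worth, the RealSense cameras come with an official Python SDK (pyrealsense2, installable with pip), so "bespoke support" can be pretty small. Here's a minimal sketch that just grabs one depth frame and prints the distance at the centre pixel; the 640x480 at 30 fps stream settings are common defaults, not something your particular model necessarily requires.

```python
# Minimal sketch: read one depth frame from an Intel RealSense D400-series
# camera with Intel's pyrealsense2 SDK (pip install pyrealsense2).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# 640x480 depth stream at 30 fps -- common defaults, adjust for your model.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Distance in metres to whatever is at the centre pixel.
    print("Centre of frame is %.3f m away" % depth.get_distance(320, 240))
finally:
    pipeline.stop()
```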
To get up and running most easily, it will probably be simplest to use a smartphone with lidar, after researching which apps in its app store can use it for your purposes.
The subject is something you should research yourself to determine which existing solutions best suit your desired niche. A photometric 3D model derived from regular photos and videos is what Face Transfer, FaceGen Artist Pro, and most AI photo and video deepfakes use, so that is definitely a more flexible and faster-advancing field of work. For example, I have installed Stable Diffusion on my PC with an NVIDIA GeForce RTX 4070, and it generates a 512x512 px image from a text description in less than a minute. You have the overhead of teaching yourself the AI engine's tooling, though; let's just say the documentation is very dry reading. You'd have to learn to train the Stable Diffusion engine locally on photos you took, say of the leaves, bark, flowers, and roots of different species of trees, or whatever. You could.
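If you go that route without a GUI front end, the generation side is only a few lines. This is a minimal sketch assuming the Hugging Face diffusers library and a Stable Diffusion 1.5 checkpoint; the model ID and the prompt are just examples, not necessarily what I run.

```python
# Minimal sketch: local text-to-image with Hugging Face diffusers.
# Assumptions: diffusers + torch installed, an NVIDIA GPU with CUDA,
# and "runwayml/stable-diffusion-v1-5" as an example checkpoint.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision fits easily in a 4070's VRAM
).to("cuda")

image = pipe(
    "close-up photo of oak tree bark, natural light, high detail",
    height=512,
    width=512,
).images[0]
image.save("oak_bark_512.png")
```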