Human-like manipulation for robots has been an ongoing research effort for decades. Among such efforts, stable pick-and-place is one of the simplest yet most impactful tasks. The ability to perform stable pick-and-place, especially with unseen objects in cluttered environments, would enable robots to handle common household objects and eventually carry out more complex tasks in kitchens, labs, and offices. This thesis presents a novel grasp analyzer that identifies stable sliding and lifting grasp poses for parallel grippers. The grasp analyzer combines learning-based models with physics-based models to ensure both performance and robustness. Combined with a multi-modal planner, the grasp analyzer was experimentally shown to work both in simulation and in the real world.