The smart environments of the future ubiquitous computing era need to provide their users with appropriate interfaces that go beyond today’s standard interaction modalities. In this context, gestural interfaces represent a viable solution for delivering new levels of experience to the users of such environments. Although gestural interfaces are now common on mobile devices and in video games in the form of touch, accelerated motion, and whole-body movements, interacting with gestures outdoors in new, smart environments remains problematic. Moreover, each day more applications that expose gestural interfaces to their users are deployed outdoors, such as touch screens and interactive floors and ceilings installed in public places. This project addresses the current hot topic of designing gestural interfaces for smart environments (i.e., public ambient displays) and for new miniaturized wearables (e.g., smart watches), which constitute the interactive targets of the ubiquitous computing era. To this end, we investigate feedback modalities that support gesture-based interaction for the users of these new environments. For example, visual feedback supplied on the mobile phone through augmented reality browsing can help users discover gesture commands (and advance the currently unsolved “invisibility” problem of gestures); audio feedback can inform users about the success of gesture articulation (thus contributing to an effective user experience in new environments); and vibrotactile feedback can guide the articulation of a gesture (and support training procedures for new environments, a problem not explored before). With this research, we aim to contribute new knowledge on designing gestural interfaces by implementing and evaluating feedback modalities that support gestural interaction in new, smart environments.