WOMBO is building the happiest place on the internet. Through the latest techniques in AI and a sprinkle of magic, we power next generation media to make people laugh and smile.
We have 50M+ downloads across 180+ countries. Help put a smile on a billion faces.
We are the fastest growing consumer app in Canadian history. Join a rocket ship 🚀
We are pushing the boundaries of what is possible with AI and Synthetic Media.
As we continue growing our team, this lead is expected to technically manage 2-3 Machine Learning Engineers across a mix of levels.
You will be empowered to make independent decisions for the team and are expected to partner effectively with all areas of the business.
To be successful in this role, you must possess the following:
WOMBO launched on the App Store on February 28th. We've crossed 50 million downloads in less than three months, making us the fastest growing consumer app in Canadian history.
This is just the beginning.
We have several incredible features on our roadmap and are happy to share details during interviews.
Most of our team is located in Toronto, and we ideally want new hires to be based in or willing to relocate to Toronto. However, we are open to remote employment for top talent.
Currently, we are only hiring remote employees in the U.S. and Canada for timezone convenience.
We’ll likely do a quick 30-minute phone call just to make sure we’re on the same page. After that, we’d love to work on a process together that feels fair.
Generally, there are two options:
1. Our preferred option is to work together on a project for several weeks (roughly 4 weeks), paid at whatever rate you think is fair. Working together for a bit is often a great way to determine whether there's a good match as future teammates, and it can give you a lot of transparency. We understand, however, that this isn't possible for everyone.
2. A formal but practical interview with no trick or puzzle questions (1.5-3 hours).
We usually leave work around 6 PM to have dinner with friends or family. As an early-stage startup, we don't want to hide this reality from prospective candidates: raw hours make a big difference in the impact we can make right now. This won't always be the case, but it is at present.
We often find ourselves working later into the evening and on weekends, but that's mostly driven by personal ambition; we don't expect anyone to do the same.
Yes, we have raised $5.7M.
Some of our investors include Shervin Pishevar, Josh Buckley, Global Founders Capital, Sound Ventures, and Guy Oseary.
Our technology is based on generative adversarial network (GAN) methodology. The model is trained to detect facial features and motion in our "driving video" (the video performance behind each song in our library) and apply them to a user's still image to animate it.
As a result, the user receives a video of their still image performing a lip-sync and dance to the chosen song.
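To make that pipeline a little more concrete, here is a minimal, hypothetical sketch of how this kind of motion transfer is typically structured at inference time: a keypoint detector extracts facial keypoints from the user's still image and from each driving-video frame, and a generator re-renders the still image conditioned on the keypoint displacement. The GAN discriminator is only needed during training, so it is omitted here. The class names, network shapes, and the `animate` helper are illustrative PyTorch assumptions, not WOMBO's actual implementation.

```python
# Hypothetical sketch of GAN-style motion transfer at inference time.
# Names and architectures are illustrative only, not WOMBO's code.
import torch
import torch.nn as nn


class KeypointDetector(nn.Module):
    """Predicts a small set of 2D keypoints (e.g. facial landmarks) per frame."""

    def __init__(self, num_keypoints: int = 10):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_keypoints * 2)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(frame).flatten(1)
        return self.head(feats).view(-1, self.num_keypoints, 2)  # (batch, kp, xy)


class Generator(nn.Module):
    """Re-renders the source image conditioned on keypoint motion.

    At training time this network would be paired with a discriminator
    (the "adversarial" part of the GAN); inference only needs the generator.
    """

    def __init__(self, num_keypoints: int = 10):
        super().__init__()
        motion_channels = num_keypoints * 2  # x/y displacement per keypoint
        self.net = nn.Sequential(
            nn.Conv2d(3 + motion_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, source_img, kp_source, kp_driving):
        b, _, h, w = source_img.shape
        # Encode motion as keypoint displacement, broadcast over the image grid.
        motion = (kp_driving - kp_source).flatten(1)            # (b, kp * 2)
        motion_map = motion[:, :, None, None].expand(-1, -1, h, w)
        return self.net(torch.cat([source_img, motion_map], dim=1))


def animate(source_img, driving_video, kp_detector, generator):
    """Apply the driving video's motion to a single still image, frame by frame."""
    kp_source = kp_detector(source_img)
    frames_out = []
    for frame in driving_video:          # driving_video: iterable of (1, 3, H, W) tensors
        kp_driving = kp_detector(frame)
        frames_out.append(generator(source_img, kp_source, kp_driving))
    return frames_out


if __name__ == "__main__":
    kp_detector, generator = KeypointDetector(), Generator()
    still = torch.rand(1, 3, 64, 64)                         # user's still image
    driving = [torch.rand(1, 3, 64, 64) for _ in range(4)]   # pre-recorded performance
    video = animate(still, driving, kp_detector, generator)
    print(len(video), video[0].shape)                        # 4 frames of (1, 3, 64, 64)
```

In a production model the motion would typically drive a dense warping field rather than simple channel-wise conditioning, but the overall flow (detect keypoints, compare against the source, generate one output frame per driving frame) is the same.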
We are using Python for our back end, Swift 5 for iOS, and Kotlin for Android.