Aug 3, 2015



Containers and the Search for the “Killer App”

VisiCalc for early PCs. E-mail for the Internet. SMS for mobile. Every major tech platform we’ve seen has had the benefit of a “killer application” that transformed it from “toy” or “cool project” into an indispensable, mainstream product.

Now that we’re in the midst of what looks to be another major platform shift in the data center – this time with the layer of abstraction moving from physical infrastructure via the hypervisor to the OS via containerization – talk has centered on Linux containers and whether they represent a paradigm shift in how we build and deploy applications or whether they are simply another instrument in the DevOps toolkit.

The relevant analog for mapping out the fate of containerization seems to be virtualization. Charting VMware’s history provides a hint of how container adoption and ecosystem development may unfold, but it’s far from a perfect parallel.

In 1999, VMware released Workstation, which let developers run multiple virtual machines with different operating systems locally. This solved an acute developer pain around building applications that would work across different OSes and environments. A couple of years later the company entered the server market with ESX and vMotion, which enabled live migration: a fancy way of saying you could move running VMs between physical hosts without taking the whole application down.

The VMware toolchain quickly spread through dev/test as developers could now build and test applications for different environments and then deploy them with a few clicks, confident they wouldn’t break production (assuming proper config files were installed; hence the rise of config management tools like Chef, Puppet, etc.). In addition to this grassroots, bottom-up adoption, virtualization benefited from CIO-led, top-down initiatives to eliminate IT sprawl, improve server utilization and consolidate datacenters. The result, depending on who you ask today, is that anywhere from 75 to 90% of x86 workloads are virtualized.

Hardware virtualization, then, effectively digitized the analog task of racking servers. It represented a step-function improvement in how IT could be provisioned and administered and how applications could be tested and deployed.

Now we’re seeing similar developer-led adoption of containerization, and sure enough there are myriad reasons why adopting Linux containers makes sense: from enabling application portability across compute and cloud infrastructures, to streamlining your deployment pipeline, to liberating your organization from the VMware tax. But as we sit here today, many (myself included) contend that containers don’t represent as radical a step-function improvement over the tools used to solve similar problems as VMs did in the early 2000s. Nor is there a similar top-down, CTO-/CIO-led initiative to catalyze adoption. Consequently, what we’re looking for is the killer application that unlocks the value of containers for the mass market.

What might those killer apps be? Here are three likely candidates:

  • “Dropbox-ized” dev environments – One of the most nagging engineering pains is provisioning and replicating developer environments across the org and then maintaining parity between those environments and test and production. Containers offer a way to encapsulate code with all of its dependencies, allowing it to run the same irrespective of the underlying infrastructure. Because containers share the host kernel, they offer a more lightweight alternative to VM-based solutions like Vagrant, thereby letting devs code/build/test every few minutes without the virtualization overhead. Consequently, orgs can create isolated and repeatable dev environments that stay in sync through the development lifecycle without resorting to cloud IDEs, which have been the bane of many devs’ existences.
  • Continuous deployment – As every company becomes a software company at its core, faster release cycles become a source of competitive advantage. This was highlighted in the recent Puppet Labs State of DevOps report, which found that “high-performing IT organizations” deploy code 30x more frequently, have 200x shorter lead times and suffer 60x fewer failures than their “low-performing” peers. It’s no surprise, then, that organizations are embracing continuous delivery practices in earnest. Containers, because of their inherent portability, are an enabler of this software deployment model. Instead of complex scripting to package and deploy application services and infrastructure, scripts shrink to a couple of lines that push or pull the relevant image to the right endpoint server, and CI/CD becomes radically simpler (see the sketch after this list).
  • Microservices – Microservices architecture refers to the practice of building an application as a suite of modular, self-contained services, each running in its own process with a minimal amount of centralized management. Microservices are a means, not an end, enabling greater agility (entire applications don’t need to be taken down during change cycles), faster time-to-market and more manageable code. Containers, offering lightweight isolation, are the key enabling technology for this development paradigm.
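
To make the continuous-deployment point concrete, here is a minimal, hypothetical sketch of a container-based deploy step. The registry, image and container names are invented, and in practice the pull/run half would execute on the endpoint server (for example over SSH or via an orchestrator) rather than on the CI runner; the point is simply that the deployment logic collapses to a handful of image push/pull commands.

```python
import subprocess

# Hypothetical image coordinates; substitute your own registry, repo and tag.
IMAGE = "registry.example.com/acme/web:1.4.2"

def build_and_push():
    # On the CI runner: package the app and its dependencies into an image
    # and publish it to the registry.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    subprocess.run(["docker", "push", IMAGE], check=True)

def deploy():
    # On the endpoint server: fetch the exact image the pipeline produced
    # and replace the running container with it.
    subprocess.run(["docker", "pull", IMAGE], check=True)
    subprocess.run(["docker", "rm", "-f", "web"])  # ignore failure if no old container exists
    subprocess.run(["docker", "run", "-d", "--name", "web", "-p", "80:8080", IMAGE], check=True)

if __name__ == "__main__":
    build_and_push()
    deploy()
```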

Ultimately, containerization allows companies of all sizes to write better software faster. But as with any platform shift, there is a learning curve, and broad adoption is a function of ecosystem maturity. We’re just now beginning to see the emergence of best practices and standards via organizations like the Open Container Initiative and the Cloud Native Computing Foundation. The next step is for a hardened management toolchain to emerge, which will allow devs and companies to begin building out powerful use cases. And it’s with those applications that we will start to unlock the power of container technology for the masses.


Jul 20, 2015



Caspida: How to Win in Security

Last week, we were thrilled to share the news that our portfolio company, Caspida, was acquired by Splunk. Caspida will help Splunk tackle some of the most fundamental and complex security challenges around privileged data loss and suspicious user activity, and will improve Splunk’s ability to detect, respond to and mitigate both advanced persistent threats and insider threats.

The acquisition also marked a fantastic milestone in a journey that began over 10 years ago. We had originally met and backed Caspida founders Muddu Sudhakar, Christos Tryfonas and Karthik Kannan at Kazeon, so when the team approached us again, the decision to fund them was a no-brainer.

Given this seminal moment, we thought it would be appropriate to take inventory of what’s happening in the security sector and share what we believe makes for a winner in this space.

Here is what we know today: threats are persistent. Hacks are inevitable and the adversary is likely already on the inside. Enterprise IT now comprises a catalog of hundreds of disparate systems and services distributed on-prem, in the cloud and across thousands of endpoints. Security Ops (SecOps) has less visibility into where applications are running and where data resides. This has created a significantly larger attack surface for bad actors, one that only promises to get worse with IoT. In short, the corporate perimeter has been busted wide open and the bad guys are winning.

Correspondingly, data can no longer be protected by the firewall alone. Antivirus, intrusion protection and detection, etc. are necessary but no longer sufficient. We’re now seeing the emergence of a new breed of tools that enable pervasive, always-on security. These tools monitor the network, servers, endpoints, data and users and leverage smart algorithms to alert when something has gone awry. Breach detection and remediation are now hallmarks of an effective security toolchain. Caspida helped define this category of next-gen security tools.

The combination of accelerating and more potent threat vectors with a broader IT platform shift has created a security gold rush. Corporate budgets are mushrooming with seemingly every newly disclosed breach, and new companies are being formed to get a piece of the action. As you might expect, VC funding (and noise) is at an all-time high. In this market, it is difficult to tell your head from your a$$, but Caspida provides a playbook for how to win. These are the hallmarks we as investors look for in any new security investment:

1)  Meaningful technical innovation – This is stating the obvious, but a company must have a technical advance that enables customers to prevent, detect, remediate and respond to attacks better, faster and/or cheaper than before. Palo Alto Networks created an application-aware, adaptive firewall that performed better than traditional stateful inspection firewalls. FireEye innovated around behavioral detection vs. legacy signature-based detection. Similarly, Caspida was one of the first companies to innovate at the data layer in security, applying predictive and behavioral analytics to the TBs of security events generated by the modern enterprise.

2)  Business case – CISOs do not buy technology; they buy solutions to problems. It doesn’t matter how fancy the technology is under the hood; what corporate buyers want to know is how it will make their lives easier. Caspida’s founders leveraged their background in large-scale machine learning to deliver visibility and insights to the enterprise that were previously locked up in logs and data silos.

3) Killer management + sales and marketing – Security is one of the few remaining tech purchases procured through a traditional, top-down enterprise model, and that means relationships matter more than anything. Having a management team that can access the C-suite and a sales and marketing function that can run through walls is the difference between winning and losing. Caspida was the founding team’s fourth startup working together – the trust, knowledge and network they’ve developed over the years were as important to the company’s success as the core technical IP, if not more so.

Finally, for us at Redpoint the Caspida exit further validated our desire to back repeat entrepreneurs. The deep trust between the firm and Muddu, Christos and Karthik enabled the team to focus exclusively on execution, and in doing so they built a killer company. We couldn’t be more thrilled for them and are confident this won’t be the last time we work together.

 

 


Jul 15, 2015



Market-Makers, Surfers and 10x’ers: A Model for Investing in Enterprise IT

Warren Buffett’s right-hand man and Vice Chairman of Berkshire Hathaway, Charlie Munger, credits much of his and the Oracle of Omaha’s success to an adherence to mental models, particularly in their power to guide investment decisions. Munger, in his 1994 speech at USC Marshall School of Business, elaborated:

…the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ‘em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form.

You’ve got to have models in your head. And you’ve got to array your experience—both vicarious and direct—on this latticework of models…

Mental models help investors make heads or tails of fact patterns to problem-solve quickly; something that’s become increasingly important as the velocity of companies formed and funded has accelerated to breakneck speed.

Most models tend to be deductive, stringing together premises believed to be true to arrive at a logical conclusion. For example, given a startup with little or no historical performance, VCs will default to evaluating the company across its market, management team and product and gauging alignment among those variables. If each variable, and the alignment among them, is strong, then they will likely proceed with the investment.


Figure 1: Market-Management-Product Prism

Another approach is inductive. This involves starting from a specific observation and moving toward broader generalizations that can be reapplied to the specifics of a given opportunity. It goes something like this: companies X and Y exited for over $1 billion each and had green logos (specific observation). Therefore, companies with green logos yield better outcomes (generalization). Company Z has a red logo. Pass.

Clearly the previous example is an oversimplification, but it points to the fact that inductive reasoning can be dangerous when generalizations become dogma. After all, there are exceptions to every rule and often it’s those very exceptions that become breakout successes.

However, when used appropriately, inductive models can be powerful shorthands. In particular, I’ve found that enterprise IT lends itself nicely to this approach. Why? Because by its nature the enterprise IT stack is a dynamic organism where interactions between stakeholders (customers, suppliers, partners, etc.) are tightly coupled and tend to repeat in cycles. Consequently, patterns emerge that can be mapped onto new opportunities.

With that, I’d like to introduce a model that I’ve found helpful in sorting through enterprise opportunities efficiently. This model holds that there are three types of winners in enterprise IT: the Market Maker, the Surfer and the 10x’er. Generally, if a startup doesn’t fall into one of these buckets, it earns a pass from me. Let’s unpack this by exploring the characteristics of each type of winner in more detail:

The Market Maker


Market Makers bring a discontinuous innovation to market and thereby become synonymous with the technology they spawn. Think Cisco with LAN switching and Oracle with relational databases.

Note that these companies do not have to be responsible for the original technical innovation, but most often are the ones who commercialize it successfully. SaaS was previously called ASP (application service provider) before Salesforce started banging the “No Software” drum. Virtualization was invented at IBM in the late 1960s before VMware brought ESX to market in 2001. Similarly, the earliest iterations of containers lived in open source Linux code for years before dotCloud became Docker.

The defining characteristic of these companies is that they catalyze a platform shift and, often, an accompanying outbreak of commoditization that makes its way down the tech stack.

The Surfer


Surfers leverage a market dislocation catalyzed by a more general secular trend or by a Market Maker, and take advantage of unique conditions to bring innovations to market that serve an emerging customer need.

Cloudera developed a distribution of and tooling for Apache Hadoop at a time when unstructured data growth began to outstrip the capacities and capabilities of existing data warehouses and as commodity hardware invaded datacenters. Pure Storage was founded when the price/GB of consumer-grade flash had declined sufficiently to become acceptable for deployment in high-performance enterprise workloads. New Relic correctly identified a gap in the application performance monitoring market as Ruby on Rails usage exploded and more and more workloads moved from on-prem to AWS.

Surfers most often win by taking advantage of an industry-wide technical leap forward in their own product or by resolving bottlenecks that preclude customers from capitalizing on an overarching secular trend. In doing so, the innovators in question position themselves to ride the cresting tsunami that washes over the industry.

The 10x’er


The 10x’er may not have the benefit of a unique market opportunity and, in fact, is often operating in a decelerating market dominated by one or a handful of incumbents. However, these companies have a core innovation that enables them to bring to market a product that is an order of magnitude superior to incumbent vendors’ solutions along one or multiple key customer dimensions (performance, cost, time-to-value, etc.).

Tableau spun its VizQL technology out of Stanford, enabling data visualizations with simple drag-and-drop functions and empowering line-of-business users to benefit from sophisticated BI. MongoDB became the fourth most widely adopted database in the world in only seven years by simplifying database provisioning and management for developers. More recently, Slack has upended enterprise collaboration by creating a seamless, lightweight messaging experience where IRC and e-mail fell short.

The bottom line with 10x’ers is that they represent a tangible and significant ROI benefit for customers relative to incumbent solutions.

*   *   *   *   *

The delineations between these classes of winners are far from absolute – a 10x’er could very well be riding a wave that can ultimately help crown them a Market Maker – and, in fact, the most successful companies will have several forces working in their favor.

Mongo flattened the learning curve for developers to get up and running with a database, but also benefited from the broader NoSQL boom. In doing so, Mongo has become the poster child for non-relational databases. Docker’s adoption has been buoyed by the shift to distributed application architectures and by the DevOps wave, with its accompanying move to continuous deployment. Similarly, VMware benefited from the general trend around IT consolidation.

The takeaway is that great companies are not built in a vacuum. The tech stack is an ever-evolving, dynamic system where a small change in one part of the stack can send shockwaves through the entire ecosystem. Correspondingly, at a given moment in time there exists a set of conditions that creates opportunity. Having a set of mental models you can lean on as an investor allows you to spot and capitalize on those opportunities faster.

 


Jul 13, 2015



How To Determine Your Lifetime Value

This post is part of an ongoing series where I practically walk through important calculations, metrics and unit economics for consumer internet businesses.  Today, we compare several different types of lifetime value (LTV) curves.  

In my prior posts, I explained how to properly calculate revenue and margins, and then, using those principles, I discussed how to properly calculate LTV.  As you look at LTV curves, it becomes apparent that there are 3 main types – 1) Exponential, 2) Linear, and 3) Decaying.  It is important to understand each type.  Again, LTV here is the revenue generated by an average user in a particular month.  The math is based on the retention of your users, or cohort retention.  Add each month up over time and you get a cumulative LTV curve.  Variables that affect LTV for consumer services are typically things like user retention, average order value and order frequency.  Let’s quickly examine each type of curve:

Exponential LTV – This is the best possible curve and is a mark of a sticky consumer service.  The slope of the curve steepens because your user cohorts are generating more incremental revenue over time.  Perhaps retention is getting better, or order frequency is rising, or maybe average order value is going up.  To illustrate, if a user is ordering a single product for $60 each month, your LTV curve would become exponential if that user started ordering TWO $60 boxes each month, and eventually even THREE.  Or, if the price of the good WENT UP to $100.

Uber is a great example of exponential LTV.  When the service first started, users used the service infrequently, say once a week.  Today, Uber has better liquidity and better service, so the average user is probably using Uber several times per week.  Having a business with exponential LTV means that if you finish a year with $100M of revenue, you can basically do zero marketing the next year and your business will still grow at a healthy clip.

Linear LTV – Linear LTV, though clearly not as exciting as exponential LTV, is generally a healthy thing.  It implies consistent user behavior and a sticky service.  If you have an ecommerce business and users are buying $300 worth of goods every single month without fail, then that is linear LTV (and very impressive).  If you finish the year with $100M of revenue, you can do zero marketing the next year and still generate $100M of revenue the following year.

Decaying LTV – Decaying LTV means that you are getting a decreasing amount of revenue per user each month, e.g. if a user was spending $300 per month initially and eventually decreased to just $150 per month.  If a business finished the year with $100M of revenue and spent nothing on marketing the following year, then it may only retain a small portion of that revenue.  I sometimes call this the “treadmill effect”, in that the larger you get as a business, the faster you have to run in order to continue growing the business while making up for lost revenue.

Decaying LTV is not always that bad, particularly if you are getting really good “payback”, that is, generating enough revenue per user in the early days in order to cover your customer acquisition costs quickly.  If you’re selling a $1,000 product and acquiring users for next to nothing, then it probably doesn’t matter if that user never returns.
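
To make the math concrete, here is a small illustrative sketch of the cumulative LTV calculation described above: monthly revenue per active user, weighted by cohort retention and summed month over month, plus a simple payback check.  The retention, revenue and CAC numbers are invented for illustration; swap in your own cohort data.

```python
# Illustrative cohort numbers (not real data): revenue generated by an active
# user each month and the share of the original cohort still active that month.
revenue_per_active_user = 60.0                      # e.g. one $60 order per month
retention = [1.00, 0.80, 0.70, 0.65, 0.62, 0.60]    # months 1 through 6

cac = 120.0  # hypothetical customer acquisition cost, for the payback check

cumulative_ltv = []
total = 0.0
payback_month = None
for month, retained in enumerate(retention, start=1):
    total += retained * revenue_per_active_user     # revenue from the average cohort member this month
    cumulative_ltv.append(round(total, 2))
    if payback_month is None and total >= cac:
        payback_month = month                       # first month cumulative LTV covers CAC

print(cumulative_ltv)   # [60.0, 108.0, 150.0, 189.0, 226.2, 262.2]
print(payback_month)    # 3 -- a $120 CAC is recovered in month 3 in this example
```

Whether the resulting curve is exponential, linear or decaying comes down to whether those monthly increments grow, stay flat or shrink; in this invented example they shrink, so the curve is a decaying one.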

Parting Thoughts

Unit economics can be tricky.  In a future post, I’ll discuss the right questions to ask yourself when thinking through LTV and payback.

See more at: http://mahesh-vc.com/how-to-determine-your-lifetime-value/


Jul 2, 2015



DockerCon 2015: Outside the Echo-chamber

DockerCon tore through SF last week and the feeling is that we are at the apex of the hype cycle. Fear not, we at Redpoint are here to (attempt to) distill signal from noise. Here’s a recap of the top story-lines as we see them along with some thoughts…

You down with OCP…?!

What happened: Docker and CoreOS got on stage, kissed and made up, and announced the Open Container Project (‘OCP’). OCP is a non-profit governance structure, formed under the Linux Foundation, for the purpose of creating open industry standards around container formats and runtimes. You may remember that back in December ’14 CoreOS made headlines by announcing rkt, an implementation of appC, the company’s own container image format, runtime and discovery mechanism, which, in contrast to Docker’s libcontainer, was open both technologically and in its development methodology. Then in May at CoreOS Fest, CoreOS’s inaugural conference, appC appeared to be gaining momentum and image format fragmentation seemed inevitable. Instead, a mere seven weeks later, it appears Docker and CoreOS are willing to put aside their differences to work together (and with the likes of Google, Amazon, Microsoft, Red Hat, and Intel) toward an open container spec.

Our take: The big winner is the broader container ecosystem. There are at least a half-dozen credible alternatives to Docker’s libcontainer emerging, and while competition is generally a good thing, the introduction of multiple image formats creates ecosystem fragmentation, which constrains customer adoption and broader momentum. Consolidation around the OCP spec will ensure interoperability while enabling vendors to continue innovating at the runtime layer. More importantly, by agreeing on low-level standards, the community can move on to solve higher-order problems around namespaces, security, syscalls, storage and more. Finally, the loser in all this appears to be the media, now that there’s, at the very least, a ceasefire in the Docker-CoreOS war.

Docker Network and more dashed startup dreams

What happened: In early March of this year Docker acquired Socketplane to bolster its networking chops, and the fruits of that acquisition were displayed in a new product release called Docker Network, a native, distributed multi-host networking solution. Developers will now be able to establish the topology of the network and connect discrete Dockerized services into a distributed application. Moreover, Docker has developed a set of commands that enable devs to inspect, audit and change topology on the fly – pretty slick.

Our take: The oft-forgotten element to enabling application portability is the network – it doesn’t matter if your code can be executed in any compute substrate if services can’t communicate across disparate network infrastructures. Docker’s “Overlay Driver” brings a software-defined network directly onto the application itself and allows developers to preserve network configurations as containers are ported across and between datacenters. The broader industry implication here is that Docker is continuing to platform by filling in gaps in the container stack. The implication for startups? You will NOT build a large, durable business by simply wrapping the Docker API and plugging holes.
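
As a rough, single-host illustration of the workflow described above, here is a hedged sketch that scripts the docker network commands as they later shipped in the stable CLI. The network, image and container names are hypothetical, and a true multi-host overlay network requires additional cluster setup beyond what is shown; the point is simply that topology becomes something you declare, inspect and change with a few commands.

```python
import subprocess

def docker(*args):
    # Thin wrapper around the docker CLI; raises if a command fails.
    subprocess.run(["docker", *args], check=True)

# Declare a user-defined network, then attach two hypothetical services to it
# so they can reach each other by container name.
docker("network", "create", "appnet")
docker("run", "-d", "--name", "db", "--net", "appnet", "redis")
docker("run", "-d", "--name", "web", "--net", "appnet", "nginx")

# Inspect and change the topology on the fly: audit what is attached,
# then connect an already-running container to the same network.
docker("network", "inspect", "appnet")
docker("run", "-d", "--name", "worker", "alpine", "sleep", "3600")
docker("network", "connect", "appnet", "worker")
```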

Plug-ins and the UNIX-ification of Docker

What happened: Docker finally capitulated to industry demands and announced a swappable plug-in architecture and SDK, which will allow developers to more easily integrate their code and third-party tools with Docker. The two main extension points featured were network plug-ins (allowing third-party container networking solutions to connect containers to container networks) and volume plug-ins (allowing third-party container data management solutions to provide data volumes for containers that run stateful applications), with several more expected soon.

Our take: For a year now there’s been an uneasy tension between Docker and the developer community as Docker became less a modular component for others to build on top of and more a platform for building applications in and of itself. The prevailing fear was that in Docker’s quest to platform, it would cannibalize much of the ecosystem, create lock-in and stifle innovation. Docker’s party line has always been that “batteries are included, but swappable,” implying you can use Docker tooling out of the box or swap in whatever networking overlay, orchestrator, scheduler, etc. that works best for you.  The plug-ins announcement is a step in that direction as it appears Docker is finally not only talking the UNIX philosophy talk, but walking the walk.

Container Management Mania

What happened: Whether it’s called “containers as a service,” “container platform,” “microservices platform” or plain old “PaaS”, it’s clear that this is the noisiest segment of the market. We counted no fewer than 10 vendors on the conference floor touting their flavor of management platform.

Our take: Everything old is new again. The evolution of container management is analogous to that of cloud management platforms (“CMPs”) when virtualization began invading the datacenter. Dozens of CMPs were founded between 2006 and 2010 – the likes of RightScale, Cloud.com, Makara, Nimbula, etc. Several have since been acquired for good, but far from great, outcomes, and the sea is still awash in CMP vendors competing feature for feature. Correspondingly, as the compute abstraction layer moves from the server (hypervisor) to the OS (container engine), a new breed of management platform is emerging to provision, orchestrate and scale systems and applications. Will the exit environment this time around mirror the previous cycle?

*   *   *   *   *

Stepping out of the echo-chamber, the big question remains one of adoption. There are some technological gating factors that will inhibit enterprise deployments in the short term – namely persistence, security and management – but the overwhelming constraint holding back containers appears to be a general lack of expertise and established best practices. The good news is that these are “when,” not “if,” issues that pertain to ecosystem maturity, and the steps taken by Docker last week will only help accelerate that process.

With the groundwork laid, we see an exciting year ahead for the container community. Container adoption only feels more inevitable now. There are many hard problems to solve, but hopefully (fingers crossed) there is now more alignment within the community. Start-ups and large enterprise companies alike can begin, in earnest, the real work required to drive broad adoption of this technology in datacenters. Hopefully we will look back a year from now and feel that this was the year the technology moved beyond the hype phase to real adoption.


May 22, 2015



Great Companies Are Built by Great People: Redpoint Seeking Community + Events Manager

 

Over Redpoint’s 15-year history, we’ve been lucky to partner with exceptional entrepreneurs who change the world with great ideas and companies. In the process we’ve built a deep and wide community of founders, friends and partners in the industry. One of the best parts of our community is that those in it love to convene, connect and compare notes on everything from practical advice to shared experiences.

We are currently seeking a talented, detail-oriented and ambitious Community + Events Manager with 2-3 years of experience who can help bring our network together on a consistent basis through curated events and social channels. This person will work with me and our head of marketing and will be an important part of the Redpoint team. This is a brand-new position that can be based out of San Francisco or Menlo Park.

More details are below. If you know someone we should talk to please send a note to me at [email protected]

Community + Events Manager Profile

You are passionate about start-ups with a love of convening entrepreneurs around common goals and interests. You know that big ideas can change the world, but that standing out through smart execution means everything.

You get excited about the power and possibility of thoughtfully curated events and amplifying them with social channels. You’re not just a community manager, you’re an evangelist for your people. You know who they are, their passions, idols and heroes and what keeps them up at night. You learn by listening and observing as much as participating. You plan events in your sleep and obsess over the details.

In this role you will be tasked with organizing a regular cadence of meet ups, dinners and workshops.  You will build and maintain a portfolio company resource center of actionable tools, content, and other relevant materials to help our founding teams. You are  just as comfortable on social media as you are IRL and you know how to integrate the two. Maybe you just saved this post on Pocket.  If you had to summarize Hooked in 140 characters you wouldn’t be fazed.

You know how much your work matters, so you are also passionate about using the right tools to measure impact and evaluate programs.

Do great work and make an impact with a team who loves to support founders!


May 13, 2015



Announcing Redpoint VI

Today we are pleased to share the news that Redpoint has closed Redpoint VI, a $400 million early stage fund. Like its predecessor funds, Redpoint VI will be invested in Seed, Series A and Series B rounds of the next generation of industry-defining consumer and enterprise startups.

Redpoint was founded 15 years ago on the core values of teamwork, respect and fairness, and those principles still guide everything we do today. Since 2014 our companies have had 4 IPOs and 6 M&A events with a market cap in excess of $8 billion. In total, Redpoint manages $3.8 billion, with 434 companies funded and 136 IPOs and M&As.

But numbers only tell part of the story. Redpoint has been lucky to partner with many exceptional founders and develop deep, long-term relationships with people who change the world. There’s no better example than Andy Rubin, the creator of Android and an entrepreneur Redpoint backed twice before he became an entrepreneur-in-residence and then joined the firm this year as a venture partner.

Entrepreneurs are re-imagining the world at an ever faster pace. With Redpoint VI, we are excited to continue supporting founders pushing the forefront of technology, creating new markets, and transforming industries as we have with Stripe, Sonos, Twilio, PureStorage, HomeAway, NextDoor, RelateIQ, Zendesk, 2U, Qihoo 360, Kabam, Beepi and Luxe, among many others. We look forward to what comes next in support of the next generation of exceptional entrepreneurs.


May 12, 2015



Does Your Start-Up Know How to Interview?

Knowing how to effectively interview and evaluate candidates is critical to finding the right talent,  yet most companies don’t take the time to help their teams learn this important skill.  If you don’t train your teams to properly interview candidates you can end up hurting your ability to attract rock star talent.  The good news is that interviewing is a skill that can be taught.

With all of the advice about various types of technical interviews and the “right” way to interview a prospective candidate, how does a start-up figure out what types of interviews are best for it? This was the focus of a recent panel event I moderated, hosted by the folks over at RockIT Recruiting. The panel was a mix of engineering and recruiting leaders and founders. Here’s a recap of what we discussed.

Find Your Mojo

There is no one-size-fits-all solution to figuring out the right types of interviews for your company. Be thoughtful about what’s right for your own circumstances and avoid automatically trying to emulate larger companies. Aline Lerner, engineer-turned-recruiter and founder of interviewing.io, pointed out that these larger companies may already have a strong brand, which in many cases gives them a larger supply of candidates. Start-ups need to view the interview as an opportunity to evangelize the company’s vision.

Define the Right Questions

When determining what actual interview questions to use, startups need to use challenging questions but not make them so hard that no one can answer them. Jared Friedman, Co-founder and CTO of Scribd, suggested having questions that start simple and get progressively harder. He doesn’t expect that the candidate will necessarily finish a lengthy question in the allotted time, but he likes to see how the candidate approaches it. Another tack is to ask average or slightly above-average questions but then grade the candidate harder. If a candidate doesn’t do well on an average question, it tells you more than if they don’t do well on a very hard question.

Emil Ong, Principal Software Engineer and Engineering Lead from Lookout, described their practice of “Hackernoon Days” where people across the company participate in mock interviews with each other. This is a great way to evaluate new interview questions and engage employees as part of the process. Scribd built a wiki of interview questions that have been added to over time, largely from actual scenarios of problems they’ve dealt with over the years. They make a point to test market questions with Scribd’s actual engineers to understand how to qualify and rank a “good answer”.

Assess the Right Fit

An important consideration when hiring people is culture fit, but this is also an area many interview teams don’t know how to assess. Aline from interviewing.io looks at technical culture fit – is the culture pragmatic or academic? Does the team prefer to just get things out the door, or do they believe in testing? A way to see if there is a match with your technical culture is to ask candidates about their past engineering environments – what they liked, what they would do to improve them, what broke, and how it could have been prevented. You can also ask the candidate outright what defines a good set of best practices to see if it’s aligned with yours.

A critical part of being able to assess for culture fit is to understand what your culture is in the first place. Soham Mehta, who founded Interview Kickstart after spending six years at Box as an Engineering Director, advises companies to define their culture as simply as possible. Make sure employees know your culture and know how to describe it. Then you can evaluate against it.

The market is competitive, and if start-ups don’t put the effort into knowing what is important to them – values, culture and technical skills – they will struggle to hire the right people. Your talent brand grows stronger as your company gets more skilled at interviewing, so it’s a good investment of time to put thought into your interviewing process in the early days. Start off by defining your values, because that is what everything in your business, especially the interview process, needs to reflect.

 


Apr 14, 2015



Duo Security: Making Advanced Security Available to All

Today we are excited to announce that Redpoint is leading a growth investment in Duo Security, an emerging leader in the two-factor authentication market. Dug Song and his team have architected an enterprise security solution that is elegant, highly effective, and easy to use and deploy. We’re proud to back such a renowned security team as it expands the two-factor authentication and access security market to all enterprises.

It is no secret that security breaches are the frequent subject of newspaper headlines. Hackers are becoming more and more sophisticated, and the cost of each breach continues to rise. One of the most common causes of these high-profile security breaches is employee credential theft. With more employees logging in remotely to a variety of corporate data and cloud services, hackers with stolen credentials can access large amounts of sensitive corporate data. While virtually all enterprises have advanced perimeter security defenses, these defenses are not optimized to stop a hacker logging in to a corporate server or cloud service with stolen, yet valid, employee credentials. Two-factor authentication solutions initially emerged to solve this problem through hardware-based token products that proved to be expensive, cumbersome to use and not immune from being hacked, as RSA found out a few years back. As a result, until recently, only the largest enterprises embraced two-factor authentication, leaving thousands of other enterprises vulnerable to security breaches based on credential theft.

Recognizing this opportunity, Dug Song and Jon Oberheide (both previously of Arbor Networks) set out to solve the problem by creating a flexible, easy-to-use, two-factor authentication solution for all enterprises that leverages the ubiquity of the smartphone. With Duo Push, users simply type in their credentials and, with one tap on their smartphones, log into their corporate VPNs, SaaS platforms and on-premise applications. Entire enterprises can be deployed in a matter of hours through a fast self-enrollment process, removing the need for complicated provisioning of hard tokens or clunky mobile apps and SMS codes. Additionally, Duo Security provides easy integration with dozens of cloud apps and allows enterprises to create many custom security features and policies.

Sometimes the most interesting opportunities happen when the right technology meets the market at just the right time. In the case of Duo Security, enterprises of all sizes are realizing the perils of credential theft and the need for two-factor authentication for their employees. By providing the most secure, least invasive, most usable solution to this problem, it is no wonder that Duo Security has managed to grow quickly and land over 5,000 customers, including Facebook, NASA, Box, Paramount, Toyota, and WhatsApp. Dug and his team are committed to providing their customers advanced security solutions beyond two-factor authentication, and just today announced Duo Platform, which allows IT teams to define policies for access, automate enforcement of controls based on risk, gain visibility into access-related security threats and get insight into the security profile of end-user devices.

Redpoint’s growth fund looks to partner with emerging market leaders and founders who want to build large independent companies. We are very excited to be working with Dug and his team, which now includes Zack Urlocker as COO. Zack previously held that role at Redpoint portfolio company Zendesk and no doubt plans to bring the successful Zendesk growth playbook to Duo Security.

Congratulations Dug and team on the tremendous progress so far. We are looking forward to working with you on the journey ahead!

 


Apr 6, 2015



Andy Rubin Joins Redpoint

Most people know Andy Rubin as the creator of Android. I know Andy as a twenty-something engineer at WebTV who built a sleeping loft above his cubicle so he didn’t have to leave the office. Twenty years, three successful startups, and 2 billion Android devices later, Andy is joining Redpoint as a Venture Partner.

WebTV was one of the first companies I ever backed as a venture investor. It had one of the most impressive groups of technical founders with whom I have ever worked, and Andy was one of the first engineers they hired. I noticed Andy was exceptional right away, not only because of his loft, but because he possessed an uncommon combination of technical skill and vision. He developed the first platform that connected the web to people’s televisions. WebTV grew rapidly and ultimately sold to Microsoft for $500 million.

When Andy left Microsoft and co-founded the smartphone pioneer Danger, Redpoint was early to invest. As CEO, Andy built Danger from nothing into the must-have tech gadget of its time, with over 2 million devices and fans like Paris Hilton and Snoop Dogg. Danger invented and deployed many of the core smartphone services we use today. Among Danger’s technical firsts were integrated messaging, mobile video, over-the-air OS updating, and the app store. Soon after Danger achieved scale, Andy left and joined Redpoint as an Entrepreneur in Residence. He was a colleague of Satish Dharmaraj, who was incubating Zimbra in our offices and later also joined us as a Partner.

It was in Redpoint’s offices that Andy first conceived Android. Ten years later, Android has become one of the most widely adopted technologies in the world, an operating system powering billions of devices. Andy’s expansive vision isn’t just limited to technology. Android’s success depended on discovering the right go-to-market model and pursuing the key business relationships that were required to make it the enormous success it is today. Andy sees what’s possible well before most people.

Whenever I visit Andy, he always has the newest thing, the yet-to-be available gadget.  Years ago, he smuggled from Japan the smallest flip phone in production. He kept robotic dogs as pets. Andy bought one of the first Segways and immediately drove it up a half-pipe, just to see how the gyroscopic systems would react. At Google, he modified a huge auto manufacturing robotic arm to make a cappuccino and stamp the Android logo on it in chocolate. Later, he had a near life-size humanoid robot that followed you around. It’s this wonder and passion for technology that enabled Andy to change the world three times over.

We started talking in earnest about Andy joining Redpoint twelve months ago over a cup of coffee at his wife’s bakery in Los Altos. Andy had some big ideas about the evolution of hardware and software, but he wasn’t sure whether he would pursue them through his own hardware-focused incubator, Playground, or as a partner at Redpoint. Ultimately, we decided he should do both, and so we partnered a fourth time. Andy has become a Venture Partner at Redpoint, and Redpoint is the first investor in Playground.

Andy is a perfect complement to our team at Redpoint helping founders of mobile, marketplace, SaaS and infrastructure companies achieve their ambitions. There isn’t a founder out there that won’t benefit from Andy’s ideas, experience and industry connections.

Andy is already evaluating and backing companies with us, and we’re excited to see how he can help Redpoint founders moving forward. We’re thrilled that he’ll be a part of our team identifying the next great startups and working with teams to realize their full potential.