
Creating Career Options in Tech

The Geek Whisperers is a great podcast focused on the non-tech side of tech careers — mentorship, career building, leadership, etc. They had me on a couple of GlueCons ago to discuss how to think about career options and advancement in tech. You can listen to the whole thing here.

Get exposure

In general, you can’t really know what all your career options are. But what you can do is set yourself up in a situation or create an environment where options present themselves. You can sort of maximize the serendipity and the optionality around you. For me that was moving from a role that was buried in an organization to a role that exposed me to a larger diversity of people and projects.

In an engineering position I had to deal with architects, so I became an architect. As an architect I had to deal with product managers, so I became a product manager. As a product manager, I had to deal with marketing roles, so I moved to one of those.

Take adjacent roles

It’s hard to totally jump two or three degrees from what you’re doing, but what I think you can do relatively easily is to move to an adjacent discipline. Then you get exposed to a bunch of new things, from which you can pick another adjacent discipline.

I have a personal mantra associated with this: if you think someone (or everyone) in a particular role is an idiot, you probably don’t understand it, so you should go do it to figure it out.

Of course, my journey has been totally accidental. In each case I was either unhappy with what I was doing or unhappy with what people in the adjacent job were doing and wanted to fix it. Or sometimes both. So I would just start doing the job until it was self-evident that people would have to fire me to stop me from doing it.

What I’ve found consistently in tech is that you can basically do any job you want, if you just go ahead and do it. The person you’re working for right now might not let you do it, but someone else is going to let you do it.

You can basically do any job you want, if you just go ahead and do it.

That’s a privileged statement. I don’t know if that’s actually true for non-white-or-Asian males. And if it is, the barriers to entry or transition are likely much higher.

In any situation, there’s someone on the other side of the table from you. That’s adjacent. Whoever that is, whatever that role is, you should be able to do some part of it. If you’re not able to start doing it, it’s probably too many steps away.

Tweet without intent

Twitter is responsible for my entire career. Unbeknownst to me, I built a reputation, and people started approaching me to take on new roles because of my interactions there.

What is it about Twitter that builds credibility? My theory is that the validation provided by someone’s public persona expressing what it is that they do on a regular basis is kinda like establishing bona fides in a meeting. But constantly, every day.

Everyone has an individual style. My style is to try not to have an intent in any forum where I’m representing myself (and not a business). I try to use Twitter the way I would use a party, or any other social situation. Whatever I would normally do. I tweet when I have a coffee because in a social situation it would be normal for me to walk up with a cup of coffee, so doing it on Twitter is normal.

I know people who do use Twitter with intent: intent to get a job, to raise their profile, etc. It works for them. So I’m not saying you shouldn’t do that. It just doesn’t work for me.

 

we are not who we think we are

Presented at Velocity NY 2015. About 2/3rds of the way through, I lost my way. But it seemed to work anyway. What I attempted to do was to put the subtext on the slides while I was presenting the overt part.

Let’s talk about the epistemology of the self. 

We, as human beings with human brains and human mechanisms, build models. It’s how we understand the world. Our models influence what we perceive, how we grasp it, and what we can make of those interactions with the world. They are the boundaries beyond which we’re simply unable to grasp what we encounter (and frequently reject), the basis of all bias, why we fail to understand each other.

We take the world and divide it up into neat little categories, according to our models, that seem whole and complete and seamless, with hard black and white boundaries. These categories make it easier to understand and deal with the multivariate complexity of the world. They make it easier to scale our brains.

BLACK AMERICAN EUROPEAN GEEK MALE FEMALE TRANS HOMOSEXUAL DESIGNER  LESBIAN BI ASIAN CREATIVE NERD WHITE ANALYTICAL INTP ENFJ NERD HISPANIC WASP FOB ABCD CONSERVATIVE GOTH LIBERAL RAVER AD NAUSEAM

And we do this to ourselves. I am a set of labels and categories and group memberships. These are a kind of index, that helps me place myself in the world, in my model of the world, relative to everything else. It gives me a comforting feeling. It’s the comforting story I tell myself about who and what I am. A kind of self-myth.

FIRST GENERATION SON OF IMMIGRANTS FIRST TO GO TO COLLEGE INTROVERTED SOCIALLY AWKWARD HUSBAND STEP-FATHER STEP-SON FISCALLY CONSERVATIVE SOCIALLY LIBERAL CYNICAL REALIST CLOSET OPTIMIST ADAPTABLE INDEPENDENT GROWN ASS MAN LOVING SUPPORTIVE EMPATHY-ABLE ONE OF THE SMARTER PEOPLE IN ANY ROOM CAN UNDERSTAND ANY TECHNICAL THING SPECIAL STRONG WILLED

It's a useful and important myth. Without the self-model, we'd have no way of understanding anything. It’s the ultimate frame of reference. Without it, you wouldn’t know how far to move your fingers to hit the keys or how to communicate with anyone.

And it's built on truths--I am all those labels. They're just not necessarily complete or accurate. The messy, less clear cut bits that get abstracted away by the model inevitably make themselves known in ways we can’t even see most of the time.

GREW UP FATHERLESS -> TRUST ISSUES
GREW UP POOR -> FINANCIAL INSECURITY AND NEUROSIS

It’s not just that there are things below the surface that we don’t know about each other. It's that there are things below the surface we don’t even know about ourselves. They come out in our assumptions, our biases, our automatized behaviors.

EMPATHY IS NOT TO BE EXPECTED, SO DON'T HAVE ANY
SUPPORT WILL NOT BE GIVEN, SO DON’T GIVE ANY
YOU ARE AN OUTSIDER, SO MAKE OUTSIDERS OF OTHERS

Other people don't really see you; they see what you present. You can't really see yourself; you only see what you present to yourself.

EXTERNAL: CONFIDENT AND COMPETENT HIGH ACHIEVER
INTERNAL: UNDERDOG AND PERENNIAL OUTSIDER
UNDERNEATH: DESPERATE FOR ACKNOWLEDGEMENT 

People go to extreme lengths to preserve the external identities they present to the world. We can go to similarly extreme lengths to preserve the internal identities we present to ourselves, to not be faced with the gaps between who and what we think we are and who and what we actually do.

Think about what happens when someone points out something you say or do that doesn't fit with your self-model. What do you do? I tend to dismiss it, explain it away, deny it--even completely fail to see it, blinded by my own myth. Our deep need to rationalize who we believe ourselves to be with what we actually end up doing leads us to cover up the reality of what we are.

LOGICAL, SO DON’T DEAL WITH EMOTIONS
GOOD INTENTIONS, SO DON’T CONSIDER CONSEQUENCES
OPEN MINDED, SO DON’T CREATE DIVERSITY

That reality is exactly what needs to change when we need to understand something new, when we need to see things differently, when we need to do and be different. It's not enough just to change categories and labels. And to change the reality means to first see it, acknowledge it, and accept the fact of it.

Like holding strong opinions weakly to be open to new ideas, maybe we should hold strong identities weakly.

Instead of valuing who we are, we should value what we do.

Because we are not who we think we are, but we can’t help becoming what we do.

Maybe we can take our self-labels, self-categorizations, self-models, and self-myths--with a grain of salt. Maybe we can unmake ourselves, in order to remake ourselves. Models can be taken apart and new models built out of the pieces. A kind of dialectic of the self.

INSECURE AND FATHERLESS -> CONFIDENT THROUGH FATHERHOOD

What would happen if you unmade yourself?


This presentation was built out of the pieces of other presentations given over the past year: 

the dangers of models

All models are wrong; some are useful.

Disconfirmatory evidence is more important than confirmatory evidence.

Actively seek model invalidation.

Everything was built in some context, or scale. Reading primary sources, or learning how/why a thing was made, is essential to understanding the conditions that held and knowing the bounding scales beyond which something may become unsafe.

This is something I think about a lot. It's true in software, distributed systems, and organizations. Which is the world I breathe in every day at SignalFx.

It began to knit together around OODA:

  • ooda x cloud-- positing how OODA relates to our operating models
  • change the game-- the difference between O--A and -OD- and what we can achieve
  • pacing-- the problem with tunneling on "fast" as a uniform good
  • deliver better-- the real benefit of being faster at the right things
  • ooda redux-- bringing it all together

OODA is just a vehicle for the larger issue of models, biases, and model-based blindness--Taleb's Procrustean Bed. Where we chop off the disconfirmatory evidence that suggests our models are wrong AND manipulate [or manufacture] confirmatory evidence. 

Because if we allowed the wrongness to be true, or if we allowed ourselves to see that differentness works, we'd want/have to change. That hurts.

Our attachment [and self-identification] to particular models and ideas about how things are in the face of evidence to the contrary--even about how we ourselves are--is the source of avoidable disasters like the derivatives-driven financial crisis. Black Swans.

  • Black swans are precisely those events that lie outside our models
  • Data that proves the model wrong is more important than data that proves it right 
  • Black swans are inevitable, because models are, at best, approximations

Antifragility is possible, to some scale. But I don’t believe models can be made antifragile. Systems, however, can.

  • Models that do not change when the thing modeled (turtles all the way down) changes become progressively worse approximations
  • Models can be made robust [to some scale] through adaptive mechanisms [or, learning] 
  • Systems can be antifragile [to some scale] through constant stress, breakage, refactoring, rebuilding, adaptation and evolution— chaos army + the system-evolution mechanism that is an army of brains iterating on the construction and operation of a system

The way we structure our world is by building models on models. All tables are of shape x and all objects y made to go on tables rely on x being the shape of tables. Some change in x can destroy the property of can-rest-on-table for all y in an instant.

  • Higher level models assume lower level models 
  • Invalidation of a lower level model might invalidate the entire chain of downstream (higher level) models—higher level models can experience catastrophic failures that are unforeseen 
  • Every model is subject to invalidation at the boundaries of a specific scale [proportional to its level of abstraction or below]

Even models that are accurate in one context or a particular scale become invalid or risky in a different context or scale. What is certain for this minute may not be certain for this year. What is certain for this year may not be certain for this minute. It’s turtles all the way down. If there are enough turtles that we can’t grasp the entire depth of our models, we have been fragilized and are [over]exposed to black swans.

This suggests that we should resist abstractions. Only use them when necessary, and remove [layers of] them whenever possible.

We should resist abstractions.

Rather than relying on models as sources of truth, we should rely on principles or systems of behavior like giving more weight to disconfirmatory evidence and actively seeking model invalidation. 

OODA, like grasping and unlocking affordances, is a process of continuous checking and evaluation of the model of the world with the experience of the world. And seeking invalidation is getting to the faults before the faults are exploited [or blow up]. 

Bringing it all back around to code--I posit that the value of making as many things programmable as possible is the effect on scales (a rough sketch of what that loop might look like follows the list below).

  • Observation can be instrumented > scaled beyond human capacity
  • Action can be automated > scaled beyond human capacity
  • Orientation and decision can be short-circuited [for known models] > scaled beyond human capacity
  • Time can be reallocated to orienting and deciding in novel contexts > scaling to human capacity
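To make those bullets concrete, here is a minimal sketch in Python of what such a loop might look like. It is purely illustrative: observe, act_restart_service, escalate_to_human, and the threshold rules are hypothetical stand-ins for a real metric pipeline, remediation, and paging, not any particular product's API.

```python
# Hypothetical sketch of the bullets above: observation is instrumented,
# decisions for known models are short-circuited into rules, action is
# automated, and anything that doesn't match a known model is escalated
# to a human. All names here are made-up stand-ins, not a real API.
import random
import time


def observe() -> float:
    """Instrumented observation: stand-in for a real metric pipeline."""
    return random.uniform(0.0, 1.5)  # e.g., a normalized error rate


def act_restart_service() -> None:
    """Automated action: stand-in for real remediation."""
    print("action: restarting service")


def escalate_to_human(value: float) -> None:
    """Novel context: hand orientation and decision back to people."""
    print(f"escalating: observation {value:.2f} is outside every known model")


# Orientation/decision short-circuited for known models: simple rules.
KNOWN_MODELS = [
    (lambda v: v < 0.8, lambda v: None),                           # healthy: do nothing
    (lambda v: 0.8 <= v <= 1.0, lambda v: act_restart_service()),  # known failure mode
]

if __name__ == "__main__":
    for _ in range(5):
        value = observe()
        for matches, respond in KNOWN_MODELS:
            if matches(value):
                respond(value)
                break
        else:
            escalate_to_human(value)  # no model matched: a novel context
        time.sleep(0.1)
```

The rules themselves are throwaway; what matters is that anything a known model covers runs without people, and the novel contexts get handed to us.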

That last part is what matters. We are the best, amongst our many technologies, at understanding and successfully adapting to novel contexts. So we should be optimizing for making sure that's where our time is spent when necessary.

Scale problems to human capacity.

fantasy founder - elder interfaces

Continuing an occasional series about products and companies that I’d like to see built, or build.

Over the years, I’ve tried to teach my grandmother to use computers, dumb phones, smart phones and tablets--with no success. She will learn one or two things (command sequences) to get something done for a little while, but nothing sticks.

Facts:

  • English is her 5th language (depending on how you count subcontinental languages).
  • She hasn’t had much schooling, up to 5th grade maybe.
  • But she’s sharper than most people I know, having cogent conversations about geopolitics and doing relatively complex financial math in her head.
  • Her formative years were in a developing country, traumatized by mob rule, lynchings and the like.
  • Her first personal exposure to computers was in her 40s, and her first attempt at using computers was in her 60s.
  • Recently, she had a stroke and lost some significant English comprehension circuitry.

Desktops, folders, files, that there are different kinds of files, applications, trees of objects, windows, visual controls, input controls, control contexts, focus, local vs remote, online vs offline, different affordances in different mediums, different affordances in different contexts on the same medium, contextual clues built into small variances in visual presentation, the boundaries that separate one object from another, the different kinds of boundaries presented for different kinds of objects in different mediums or contexts—are all bound to and presume a certain cultural context and assume a certain set of preexisting models of how the world is organized and works.

The cultural assumptions built into our interfaces render them incomprehensible.

How we might overcome them:

  • No files: If you didn’t grow up with computers or with desks and file folders, the metaphor doesn’t work. It doesn’t translate into the model which tells you that this thing is an object and the same form of object can have different content, etc. Better would be just apps which find and organize related content, the Apple way — stepping away from having to know how things are made and work to only needing to know what it is you want to do.
  • No exposure of the filesystem: An extension of the last point: no folders, no browsing, no object tree, no files—just actions. That’s what the machine exists for and that’s why we go to it, to do something. Tool and action are fundamental enough concepts to transcend cultural context.
  • Feedback on every action: I noticed that my grandmother would frequently do something on a computer or tablet and not know that she had done it or not believe that it had happened, especially things that are ephemeral like copying text. When you don’t have a model for how the system works, you need explicit feedback that the thing you’re trying to do was done or that you’ve done a thing, period. Strong visual, tactile and/or audio feedback for every action taken to tell you not just that you have actually done it, but that the intent has been registered by the system.
  • Larger tolerances: Because fine motor skills deteriorate with age, getting shaky fingers right on a button is an unreasonable expectation, so close enough has to be sufficient.
  • Space between things: Corollary to the last point, what defines close enough should be consistent and big enough that it becomes intuitive (as an affordance) and feels easy. Which means sufficient space between all control elements to allow for not getting right on the button — as in, the whole grid square where the button is present is an active control.
  • No menus: Big buttons with big words and/or big icons, all the way; because glaucoma, macular degeneration, etc.
  • Fewer distractions: Wallpapers with objects in them or that could be confused for objects, window-dressing, and flashy visual effects that look pretty but don’t help in navigation, orientation, or feedback create noise that makes it harder to adapt to a new environment. It’s like when you’re learning a foreign language—it’s much harder to understand what’s being said in a crowded, noisy cafe than it is in a quiet setting where you can focus on the one signal that matters instead of on trying to filter out the dozens that don’t.
  • Click or no click: The whole overloaded clicking — left, right, middle, double, triple, click+drag, blah blah blah — imposes a significant burden on the user to understand and remember all the things that can be done with a single input element. Pair that with deteriorating fine motor skills, deteriorating sight, and lack of clear feedback on whether or not an action was taken and you have a recipe for confusion. Better: there is just click, or no click.
  • Limit controls and contexts: Even when I would teach my grandmother something successfully, frequently how I showed her to do something in one application would not translate at all to a different application or to a different context, like manipulating files. This is challenging in the extreme when you have no way of knowing that the context has even changed because you don’t have a mental model for the thing you’re looking at. The number of controls available in any given app should be stripped to the minimum, so there’s less to remember; the number of contexts (app vs app vs system) stripped to the minimum so there’s less to remember; and the variances between contexts (different controls in different contexts) stripped to a minimum so there’s less to remember.
  • Fullscreen everything: That apps need to be opened or closed may even be an unnecessary metaphor. If every app took up the whole screen, was open all the time, and there was an ever-present mechanism to switch between them—then that’s a few more things that don’t have to be remembered. We could reduce the cognitive burden down to: which of these dozen things do I want to do right now/next -> select. (A rough sketch of a few of these ideas follows this list.)
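A rough sketch of a few of these ideas, using Python’s built-in tkinter purely as an illustration: every action is a big, plainly labeled button that fills its entire grid square, the window is fullscreen, and every press produces loud, explicit feedback. The actions themselves are made-up placeholders, not a real product design.

```python
# Illustrative only: big buttons, whole-grid-square hit targets, fullscreen,
# and explicit feedback on every action. The action names are placeholders.
import tkinter as tk

ACTIONS = ["Call Family", "See Photos", "Take Pills", "Watch News"]


def press(name: str, status: tk.Label) -> None:
    """Explicit feedback on every action: say plainly what just happened."""
    status.config(text=f"OK: {name}", bg="dark green", fg="white")


def main() -> None:
    root = tk.Tk()
    root.attributes("-fullscreen", True)  # fullscreen everything, no window chrome

    status = tk.Label(root, text="Touch a square", font=("Helvetica", 36))
    status.grid(row=0, column=0, columnspan=2, sticky="nsew")

    for i, name in enumerate(ACTIONS):
        row, col = divmod(i, 2)
        btn = tk.Button(
            root,
            text=name,
            font=("Helvetica", 48),  # big buttons, big words
            command=lambda n=name: press(n, status),
        )
        # sticky="nsew" makes the button fill its entire grid cell, so
        # "close enough" anywhere in the square counts as a press.
        btn.grid(row=row + 1, column=col, sticky="nsew", padx=20, pady=20)

    # Let rows and columns stretch so the squares stay large at any size.
    for r in range(3):
        root.rowconfigure(r, weight=1)
    for c in range(2):
        root.columnconfigure(c, weight=1)

    root.mainloop()


if __name__ == "__main__":
    main()
```

Nothing here is clever, and that’s the point: the whole square is the control, there is only click or no click, and the status line says in words what just happened.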

Mobile interfaces are moving in the right direction.

If I put my product hat on and make my grandmother the target user, what she really wants out of a computer comes down to a managed communications experience which empowers her to:

  • Get in touch with the family and friends easily. Contacts as actions, the faces of the people she wants to contact as buttons on a screen that get in touch with them via video, phone or text. We, as relatives, need a way to remotely keep those contacts up to date via push to her device or a centralized service that propagates to her device.
  • Keep up with loved ones when we’re not talking. Facebook without the Facebook, a timeline of updates from loved ones, pictures and videos and text, shared directly to her device, in a single app, blown up full screen. A feed that any of us can push content to or that can consume and present content from things like Facebook.
  • Have important information and reminders without having to look for them. Emergency and medical information as a collaborative app, pushed to the device by doctors and loved ones for consumption by all parties involved in care, including her, for things like “Hey it’s 10am, take the blue pill!”.
  • Let loved ones help. Shared calendar that loved ones and caregivers can push events onto, like appointments and birthdays. Delegation of control for all apps and services so she can say to her banking app that I am designated to make sure her bills get paid. Or, so I can have an Uber pick her up to take her to the airport and have the notifications go to her device instead of mine. Or, so a caregiver can take over her device and its capabilities (like the camera) and show her things on it remotely or check in on her.
  • Stay in touch with the world. News and entertainment, in one of the languages she understands, including: newspapers, streaming tv and movies, and games. The usual stuff that everyone enjoys. ☺

Why this doesn’t exist is beyond me. There’s a fortune to be made for someone with the single-mindedness to build interfaces for people who are older or didn't grow up with computers or lack our cultural metaphors or have zero exposure to computers outside of phones etc. 

unicorns and the language of otherness

Because even in the face of overwhelming evidence, people will come up with excuses for why they should not, will not, can not—learn or change.

Presented at Velocity NY 2014.

Transcribed:

This man is albino, which means he has no skin pigmentation.

The red you see is the blood below the skin. His name is Brother Ali. He is a Muslim rapper from Minnesota. That makes him different from all of us, in some way. And in all likelihood, we don’t think like him.

Let’s say that I believe the earth is flat. It’s part of my identity. It’s a strong belief. I have convictions around it, decisions that I’ve made around it. I identify as an earth-is-flatter. My identity is invested in the earth being flat. An attack on the idea is an attack on me. If the idea is wrong then I am wrong. Personally. Not just about that one thing, but about my person.

Let’s say you believe something different. You believe that the earth is round. You’re an earth-is-rounder. That makes you apart from me. Not because you have a different idea, but because you have a different identity. I cannot identify with you. If you’re successful in your belief, then maybe my way isn’t the only way. If you’re more successful than I am, then maybe my way isn’t the best way. If you are successful and then I am less successful, then maybe I’m wrong. But I’m not just wrong about the idea, I am wrong as a person.

But, I don’t have to see that. I don’t have to see anything. I have labeled you as something other than me. I cannot identify with you, thus I do not have to see your success. I can ignore it. I can bury my head in the sand. My ingrained belief creates a bias about you that I have. And I rationalize that bias by calling you something else, by putting a label on you. 

There is a saying by our friend, Brother Ali, that we have a “legacy so ingrained in the way that we think that we no longer need chains to be slaves.” He’s talking about racial biases, but any ingrained way of thinking creates a bias. Biases pile up and compound into a kind of psychological debt. It’s like technical debt: you have to refactor it in order to move on. It will eventually slow you down, bog you down, prevent you from seeing things. Prevent you from noticing things. Prevent you from seeing a thing you might want to learn.

And what’s true of you as an individual is true of us as groups. Teams can have shared biases, created by their entrenched ideas and ways of doing things, that become a shared psychological debt that prevents them—not just from learning—but from seeing that they should be learning. And while they are not learning, while we are not learning, there are other people who have learned and through their learning have changed the world around us.

I was an analyst at Gartner for a couple of years and I heard this all the time: “These companies are not like us. They do things differently. They have different users. They have different environments. They can do whatever they want. They don’t have the same security concerns we do.” Any litany of excuses that says “we don’t have to learn from them because they are unicorns”—and unicorns are different, and different people are others. So, eh. It’s ok.

Turns out that unicorns are just people. And as people, they’re just like us. They’ve just made a different set of decisions in a different context in a different environment. We can make different decisions. We can create a new context. We can pay down our psychological debts. We can even declare bankruptcy like people do with economic debt and start over, throwing out ideas and practices. 

Cause the thing is, if we really want to move forward and expand and learn and grow and change for a changing environment—we have to get past the mess of our past decisions. We have to separate our identities, who we are and who we will be, from who we were, what we have done and what we have been. So that when we encounter something different or see change, or see change in others, that is not a threat to our identity and it doesn’t hurt so much to accept change and to do change. 

I don’t want to be a unicorn. I don’t want to be someone who is apart from you, other than you, does not have to be listened to, can be dismissed. And I don’t want to think of anyone else as something special, apart, different, cannot be learned from, to be dismissed, not part of the same humanity that I’m in. 

Cause, in the beginning and in the end, we are all still people. Thus, mainly in essence the same. The fact that we have some simultaneous differences, that have evolved, that don’t cause us to die out there in the world—suggests that the single strongest signal that you have something to learn is the fact that a difference exists. 

…the single strongest signal that you have something to learn is the fact that a difference exists.

the web in twenty minus five

My friend Stu tagged me to answer these questions five years ago:

  • How has the Web changed your life?
  • How has the Web changed business and society?
  • What do you think the Web will look like in 20 years?

Here are my answers, with minor edits and some commentary in []'s. The original post is here.

---

Ok, but 1) I don’t think I have anything unique to say and 2) we’re all wrong about what it’ll look like in [15] years.

How has the Web changed my life?

It’s strange to talk about the web as if it is the internet. I grew up in the late 80’s through the 90’s along with the emergence of the web as the dominant realm of the net. When I first connected, it was all about email, usenet, irc, and bbs’s. “Web” was an afterthought. Overnight, pretty much, it became the primary interface to the net. And then, the primary platform.

It’s the platform part that has impacted us most. My life is enriched by unprecedented access to commerce (amazon! threadless! zappos!), content (youtube! hulu! gutenberg!), people (facebook! twitter! linkedin!), and publishing (blogs! twitter! tumblr!). The last two have mattered most to me.

[I’ve gone from a nobody buried inside of IBM to a very-minor-somebody embroiled in #startuplife.] 

How has the Web changed business and society?

First off, we’re not talking about all businesses or all societies—really only a minority of either that are the majority in most of our spheres. There are plenty of people who could use some very simple, basic necessities that the web can’t supply. [But the internet has had an impact—see M-Pesa.]

So business and society: web as platform for connecting, producing, publishing, consuming, and trading on values. It’s a lever with a positive multiplier effect on reach and a negative multiplier on cost of achieving that reach.

The web has created a whole field of startups that require next-to-nothing to get going. It’s given a whole slew of people who would’ve once just been company (wo)men an alternative.

It has created a way to organize and collaborate that’s enabled everyone (good, bad and ugly) to come together with others along any lines for any reasons with however much anonymity for however long on any terms.

What will the Web look like in [15] years? 

  • It will remain a platform with immense multiplier effects
  • Web/desktop/here/there/os/app/interface lines will [blur for users and] only exist to the technology plumbers and enthusiasts
  • More embedded and ambient devices creating new interaction points
  • More ambient triggers for sensors and sensing
  • Touch and voice as natural interfaces
  • Physically responsive interfaces enter the real world
  • Consolidated [but distributed] virtual identities
  • Won’t solve poverty
  • Won’t solve despots/theocracies/totalitarianism/etc [but it’s certainly thrown a wrench in the works]
  • Won’t solve disease
  • Won’t solve people hell bent on destroying other people
  • Won’t solve exuberant-irrationalism