Automatically Assign Elastic IPs to Elastic Beanstalk Instances

For a while I've needed to automatically assign Elastic IP addresses to newly launched Elastic Beanstalk instances. This is super useful if you're using an external database host like Compose and only want to allow connections coming from your app instances.

But when Elastic Beanstalk brings up a new instance, it assigns it a random public IP address. The script below solves this by triggering a Lambda function from a CloudWatch rule whenever a new EC2 instance is launched.

Note: This script assumes you already have a pool of Elastic IPs – it doesn't provision them.

1. Create a Cloudwatch Rule

Create a CloudWatch rule that fires when an EC2 instance transitions to the "running" state, and have it trigger a Lambda function.
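For reference, the rule's event pattern would look something like this:

```json
{
  "source": ["aws.ec2"],
  "detail-type": ["EC2 Instance State-change Notification"],
  "detail": {
    "state": ["running"]
  }
}
```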

2. Create the Lambda Function

Paste the following code into a Lambda function:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2();
const PROD_ENV_NAME = 'my-prod-env-name';

// Example Event
// {
//   "version": "0",
//   "id": "ee376907-2647-4179-9203-343cfb3017a4",
//   "detail-type": "EC2 Instance State-change Notification",
//   "source": "aws.ec2",
//   "account": "123456789012",
//   "time": "2015-11-11T21:30:34Z",
//   "region": "us-east-1",
//   "resources": [
//     "arn:aws:ec2:us-east-1:123456789012:instance/i-abcd1111"
//   ],
//   "detail": {
//     "instance-id": "i-abcd1111",
//     "state": "running"
//   }
// }

exports.handler = async (event) => {
  console.log("EVENT:", event);
  
  // The newly launched instance ID.
  const instanceId = event.detail['instance-id'];
  
  // Fetch info about the newly launched instance
  const result = await ec2.describeInstances({
    Filters: [{ Name: "instance-id", Values: [instanceId] }]
  }).promise();
  
  // The instance details are buried in this object
  const instance = result.Reservations[0].Instances[0];
  const isAttached = instance.NetworkInterfaces.some(
    // Association is undefined for interfaces with no public IP, so guard against it
    ni => ni.Association && ni.Association.IpOwnerId !== 'amazon'
  );
  
  // Bail if the instance is already attached to another EIP
  if (isAttached) {
    console.log("This instance is already assigned to an elastic IP")
    return { statusCode: 200, body: '' }
  }
  
  // In Elastic Beanstalk, the instance's Name tag is set to the environment name.
  // There is also an environment-name tag, which could be used here instead.
  const nameTag = instance.Tags.find(t => t.Key === 'Name');
  const name = nameTag && nameTag.Value;
  
  // Only assign EIPs to production instances
  if (name !== PROD_ENV_NAME) {
    console.log('Not a production instance. Not assigning. Instance name:', name)
    return { statusCode: 200, body: ''}
  }
  
  // Get a list of elastic IP addresses
  const addresses = await ec2.describeAddresses().promise();
  
  // Filter out addresses already assigned to instances
  const availableAddresses = addresses.Addresses.filter(a => !a.NetworkInterfaceId);
  
  // Raise an error if we have no more available IP addresses
  if (availableAddresses.length === 0) {
    console.log("ERROR: no available ip addresses");
    return { statusCode: 400, body: JSON.stringify("ERROR: no available ip addresses") }
  }
  
  const firstAvail = availableAddresses[0]
  try {
    // Associate the instance to the address
    const result = await ec2.associateAddress({
      AllocationId: firstAvail.AllocationId,
      InstanceId: instanceId
    }).promise();
    
    console.log('allocation result', result)
    
    return { statusCode: 200, body: JSON.stringify('Associated IP address.') };
  } catch (err) {
    console.log("ERROR:", err);
    return { statusCode: 500, body: JSON.stringify('Failed to associate address.') };
  }
};
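For this to work, the function's execution role needs permission to describe instances and addresses and to associate them. A minimal policy sketch (scope the Resource down if your setup allows):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeAddresses",
        "ec2:AssociateAddress"
      ],
      "Resource": "*"
    }
  ]
}
```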

Whenever your Elastic Beanstalk environment launches a new instance, the above Lambda function will get hit, and if there's an available Elastic IP address, it will assign it to the new instance.

#Work Journal, aws, devops

Figma for X is Overblown

My friend Jeff posted the following a couple days ago:

Figma is an amazing tool, but what makes it so disruptive is that it brought a long-ignored part of the design process — collaboration — into design software.

By the time Figma came on the scene, the design tool space had been bursting at the seams for years specifically because of this problem. Entire companies (InVision, Flint) were built essentially to solve this one problem. Sketch kicked off a rethinking of what was possible in a post-Adobe world, but it left collaboration woefully unaddressed. You needn't have looked any further for opportunity than to observe the myriad ways design teams had invented to share and collaborate on design files.

Real-time collaboration was Figma’s initial wedge into the market, and now having invested heavily in their real-time infrastructure, it’s their moat.

But for many businesses, real-time collaboration just isn't a core part of their function; it's a fun feature, a nice-to-have. SaaS businesses focused on inherently non-collaborative functions won't magically discover an abiding need for collaboration.

Will a payroll manager develop a need to collaborate with other colleagues in real-time in order to do their job? A data scientist? Business analyst? A sales rep? Sure, the ability for multiple people to make edits to the same lead in real-time would be cool, but these functions aren’t suffering without it.

Which means, if you’re building software for one of these non-collaborative functions, whether or not your product has real-time collaboration just doesn't matter, on the margins. It won't set you apart in any meaningful way from your competitors. It won’t win you customers.

If one CRM vendor has real-time collaboration features and the other doesn't, is that likely to create a category winner for one company and not the other? Probably not.

There are plenty of product functions where real-time collaboration is core to the experience: document editing (Google Docs, Quip, Notion, Airtable), project management (Notion, Trello). The problem is, many of them already have category winners.

#product, Writing

A Few Products I Wish Existed

This is a scratch pad of product ideas that have passed in and out of my mind that I can't seem to shake. I wish they existed, but (to my knowledge) they don't. I wish someone would build them.

A better blog distribution platform

High Level: Take some of the best ideas of RSS, Reeder, old-school blog aggregators, Pocket, layer on some social signals, etc. It's a distribution platform that connects readers and writers.

Why? The pendulum is swinging back from centralized closed publishing platforms (Medium) and distribution systems (social networks) to personal blogs. Unfortunately, the blog distribution ecosystem kinda died in the last ten years.

In the last decade or so, web publishing shifted its distribution model from RSS and the ecosystem once built around it – feed readers, aggregators, etc. – to social networks and, to a lesser extent, email. Both channels have a signal-to-noise ratio not even worth discussing.

Everyone – individuals, indie media, and even the corporate media – is exhausted with the current state of things. Native mobile failed. Publishers are fed up with Google and Facebook. Email is a zoo. Thus, independent web publishers – personal blogs, company blogs, and even independent media – need a new distribution mechanism.

What is it? Imagine an interface similar to Reeder. It stores your subscriptions (RSS if it exists, something more crude if it doesn't). Just like in any feed reader, when someone you subscribe to publishes, it shows up here.

You can choose to connect your social networks to surface social signals and suggestions. Who are your Twitter friends subscribing to? Check out this article someone you engage with a lot on Twitter just shared. Additionally, the system will suggest new things to read based on your existing subscriptions.

For publishers, it's a low-cost, effective, efficient distribution channel.

For readers, it's an easy way to track and stay up to date with the sources that matter to you without getting sucked into the social media vortex.

What it isn't. It's not a fucking social network. No social graph, no engagement actions. It's a private experience. You subscribe to stuff, you can bookmark stuff.

It's also not a publishing platform, it's not Medium. It's just distribution. Content lives on the source website. There are lots of companies doing web publishing in different forms for different models. Medium has experimented with several different business models at this point. Substack is doing something like a patreon for email newsletters.

But these are publishing models. They host your words and probably provide some tools to writers/publications. The thing I want to exist is not that, and it shouldn't be burdened with building publishing tools.

It should just handle distribution: connecting readers and writers.

It should also not force the reading experience to happen on-platform. Sometimes, it's nice to put everything into a feed reader and read everything from one place. Especially if you're in an internet-challenged environment. But for other things, I want to read things where it's published. I want the design and typography the author intended. I want the images to show up the right way. A lot of that gets mangled in RSS. Such a platform should support both models.

Business Model: This would be a paid service to publishers (at a very low monthly/annual fee), thus avoiding the inevitable slippery slope of the ad model wherein you eventually throw your users under the bus, one way or another. Maybe it starts at $5/mo for tiny publishers and it scales up with your subscriber base.

Maybe at some point you charge readers for some premium services? Like if you're tracking 1000 subscriptions, maybe you should pay a few bucks a month.

A Deep Work Planner, Scheduler and Tracker

Reading Cal Newport's Deep Work transformed how I think about work. Unfortunately, the process of doing and tracking deep work feels difficult.

What it is. A high-level weekly planner that makes it easy to plan and track deep work sessions. As part of this there would probably be a deep work journal so you can keep a log of your work sessions, as laid out in the book. After each session, you reflect on it – did you accomplish your goals? Why or why not? Each session would be attached to a task, and a task attached to an estimate, so you can get better over time.

What it isn't. It's not collaborative – no sharing or planning tasks with a team. It's just for you.

A Personal Relationship Manager

Make it easy to stay in touch with the people I care about staying in touch with. Currently my contacts live on my phone and sync to gmail contacts and somehow this is still unbelievably hard.

A better running/training app

I run a lot outdoors. It's my primary exercise and form of training. Often I want to do different kinds of training. Sometimes I want to do intervals. Sometimes I want to target a heart rate zone. A lot of the time I want to just do a few miles of free running. I want something that makes that easy to plan and track with my watch.

The killer feature with this would be to plug into Spotify and track my pacing to the songs I'm listening to while running, and then rank the songs by my run performance. Automatically create playlists based on the best performing songs. Automatically build them based on the length of my runs. There's so much potential here.

A better way to track your career and skills

I want a living CV, tied into GitHub, that figures out what sort of things I'm working on and mines useful insights. The business model here would be recruiting – but good, privacy-sensitive recruiting that respects your inbox and time. Companies and recruiters would have access to a level of data not currently available, knowing your true expertise rather than the bullet points you decide to put in a resumé.

#Writing

A Simple React Date Picker Component

None of the existing open source react date picker components quite fit my requirements (mostly too bloated with too many dependencies), so I decided to see if I could quickly hack one together. This took a couple of hours, was easier than I thought it'd be, and meets my needs quite well. Most importantly, it's tiny (~100 LOC) and free of the complexity of many of the existing solutions.
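The component itself isn't shown here, but the heart of any date picker is laying out the month grid. A minimal sketch of that logic (a hypothetical helper, not the actual component from the post):

```javascript
// Build the grid of days shown for a given month: an array of weeks,
// each week an array of 7 Date objects, padded with days from the
// adjacent months so every row is full (weeks start on Sunday).
function monthGrid(year, month /* 0-indexed */) {
  const first = new Date(year, month, 1);
  // Back up to the Sunday on or before the 1st of the month
  const day = new Date(year, month, 1 - first.getDay());
  const weeks = [];
  do {
    const week = [];
    for (let i = 0; i < 7; i++) {
      week.push(new Date(day));
      day.setDate(day.getDate() + 1);
    }
    weeks.push(week);
  } while (day.getMonth() === month);
  return weeks;
}
```

Rendering is then just mapping `weeks` to rows of cells, which is how the whole thing stays around 100 lines.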


Cross-posted on https://dev.to/ajsharp/a-simple-react-date-picker-component-3216.

#Work Journal, react

How to Disable Specific Eslint Rules

The create-react-app ESLint presets come with a few rules that are slightly annoying. One of them is jsx-a11y/href-no-hash (since renamed to jsx-a11y/anchor-is-valid), which makes sure you don't add an <a> tag without a valid http address for the href property.

To ignore this, add a .eslintrc.js file at the project root with the following:

module.exports = {
  "parser": "babel-eslint",
  "env": {
    "browser": true
  },
  "rules": {
    "jsx-a11y/anchor-is-valid": "off"
  }
}

Then make sure to reload the VS Code window so the new config takes effect.
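If you'd rather silence the rule in one spot instead of project-wide, ESLint also supports inline disable comments (shown here with plain strings standing in for JSX):

```javascript
// Turn the rule off for just the next line:
// eslint-disable-next-line jsx-a11y/anchor-is-valid
const placeholder = '<a href="#">coming soon</a>';

// Or for a whole region of a file:
/* eslint-disable jsx-a11y/anchor-is-valid */
const another = '<a href="#">also coming soon</a>';
/* eslint-enable jsx-a11y/anchor-is-valid */
```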

Full eslint config file documentation here.

#Work Journal, react, todayilearned

The Hilarious Fragility of NLP APIs

Recently I've been working on functionality in Follow Reset that requires machine learning and natural language processing, so I've been experimenting with two well-known NLP APIs: AWS Comprehend and Google Natural Language. While my primary interest in these APIs is their custom modeling capabilities, I was curious to see what kind of quick results I could get from their basic entity recognition and categorization functionality.

The high-level product goal is simple: use Twitter bios to extract and detect high-level categorical information about people.

My main test case is the profile description of Joe Rogan, a very well-known comedian and podcaster with 4.68M Twitter followers (as of writing).

My (naive) hope was that these APIs would be able to extract from this description that this person is Joe Rogan, who is a comedian.

The results were...uh...surprising, to say the least.

😕 Initial (odd) results

I used Joe's Twitter bio as the input into both APIs:

Stand up comic/mixed martial arts fanatic/psychedelic adventurer Host of The Joe Rogan Experience #FreakParty http://www.facebook.com/JOEROGAN

AWS Comprehend recognizes Joe Rogan as a person. Good start. AWS has a feature called key phrase extraction, which unfortunately doesn't add much context in this case and is generally pretty useless here.
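For context, Comprehend's entity detection returns a list of typed, scored entities, so pulling out likely people is a simple filter. A sketch over the documented response shape (the sample scores below are made up):

```javascript
// Pick out PERSON entities above a confidence threshold from a
// Comprehend DetectEntities-style response.
function likelyPeople(response, minScore = 0.8) {
  return response.Entities
    .filter(e => e.Type === 'PERSON' && e.Score >= minScore)
    .map(e => e.Text);
}

// Example object shaped like Comprehend's DetectEntities output
const sample = {
  Entities: [
    { Text: 'Joe Rogan', Type: 'PERSON', Score: 0.97, BeginOffset: 67, EndOffset: 76 },
    { Text: 'Facebook', Type: 'ORGANIZATION', Score: 0.91, BeginOffset: 101, EndOffset: 109 }
  ]
};
```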

Google, on the other hand, doesn't actually recognize "Joe Rogan" as a person, though it identifies both "Host" and "mixed martial arts fanatic" as a Person entity. Odd.

While Comprehend's entity results tend to be more factual (Joe Rogan is a person), Google's results attempt to provide context, identifying direct contextual entities in his bio like "adventurer", as well as a Wikipedia article about The Joe Rogan Experience – his highly popular podcast – context not contained in the bio.

One thing that jumps out here is that while Google identifies categorical context – this is probably the description of a comic – it is unable to properly identify the Person entity to which the comic category refers. Considering Google's ability to detect and suggest context that is external to the input string, it's a bit of a head-scratcher that it fails to correctly identify the Person entity.

🔑 Changing the Inputs, Content is Key?

What happens if we change the name from Joe Rogan to Alex Sharp?

Here's the updated input string:

Stand up comic/mixed martial arts fanatic/psychedelic adventurer Host of The Alex Sharp Experience #FreakParty http://www.facebook.com/AlexSharp

Amazon still sees Alex Sharp as a person entity. Cool. I am that. 🙋‍♂️

Google's results are…unexpected. Somehow, Google is more confident that Alex Sharp is a Comic than Joe Rogan. Uh, sure 😂

🙃 joe rogan is not Joe Rogan?

What happens if you de-capitalize Joe Rogan to joe rogan? When we do this, Amazon no longer recognizes the person entity. Google still thinks we're probably talking about a comic, but not by much.

What happens if we substitute the name for a fictional character, or if we change the capitalization of some of the words around the name? Here I've changed the name from Alex Sharp to Ronald McDonald.

Both APIs seem to have a much easier time recognizing a Person entity if it's not surrounded by other capitalized words.

Amazon recognizes both forms but has a much higher confidence level when Ronald McDonald is not surrounded by a capitalized word.

Google's results are more stark, failing to recognize a Person entity at all if the term is surrounded by other capitalized nouns: it recognizes the whole phrase Ronald McDonald Experience as "Other". It thinks it's something, but it doesn't know what.

While these results make logical sense to a layman - capitalized words are often proper nouns - it's a bit disappointing that these products rely on such basic and fragile grammar rules.

☝️Massively Unqualified Advice to Amazon & Google

It seems to me (enormous caveat: I'm a complete ML/AI amateur) that the person entity recognition issue could be improved by identifying names based on whether they match (or don't match) dictionary words. Neither the words joe or rogan are dictionary words, so can we reasonably assume, with mild confidence, that paired together, joe rogan might be someone's name? I dunno, I'm way beyond swimming in the deep end of a pool I barely understand 🤷🏻‍♂️.

🧙‍♀️🚫🦄 No Magic Here

As we can see, these APIs can be incredibly fragile, often reacting in odd and unexpected ways to tiny changes in input.

They seem to operate around fairly rudimentary grammatical "rules" - proper nouns must be capitalized, proper nouns must exist on their own - which are in many cases incredibly fragile, especially considering that many of us aren't exactly writing on the internet and social media in academically sanctioned grammar. Change an uppercase letter to lowercase and the whole thing breaks, and poor Joe Rogan is robbed of his personhood. Sad. 😟

Outside of this largely underwhelming but head-scratching exercise in curiosity, there is some truly mind-blowing work happening in deep-learning-powered NLP. AWS Comprehend and Google AutoML, by contrast, feel more like the training-wheels version of a technology that can literally type words before we've thought of them, beat humans at complex strategy games, drive cars, and more. Unfortunately, these APIs are pretty underwhelming for anything other than basic grammar categorization and sentiment analysis (not covered here).


🙏 Thanks for reading. If you're interested in learning more about the product that inspired the work for this post, check out and subscribe to Follow Reset, which will soon make it very easy to clean up your Twitter feed.

#nlp, machine learning

React CRA + Netlify = 💯❤

I've been shipping static frontends to S3/CloudFront for a while, and I finally gave Netlify a chance – and wow, is it easy. You connect your GitHub repo, give it a command to run, and you're done. They handle all the CloudFront/CDN stuff behind the scenes that I used to have to configure manually.
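For a create-react-app project, the whole configuration can live in a netlify.toml at the repo root – a minimal sketch:

```toml
[build]
  command = "npm run build"
  publish = "build"
```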

See here for more.


🚀🚀🚀 Thanks for reading my development journal, where I write about things I learn while building products.

The product I'm currently building is Follow Reset, which makes it easy to clean up your Twitter by helping you prune who you follow. Subscribe here to get notified when it launches, or head to the website to learn more. 🙏🙏🙏

#Work Journal, react, todayilearned