One of the revolutionary changes sparked by the advent of cloud computing was that businesses gained access to a plethora of services they would not normally be able to afford or manage themselves.
This was not just a case of being able to rent supercomputers. Before the cloud, if you wanted that kind of processing power you had to build the big beast yourself, which in itself put it outside the budgets of most firms. It was much more than that.
It was (and is) about having access to multiple machines, creating powerful computing clusters that most businesses wouldn't be able to maintain or afford. Not to mention the costs of the manpower, space and power the machines themselves demand.
However, if you could rent it and stream it from the cloud as a service, then advanced technologies like Big Data Analytics fell within the operational scope of many companies.
Amazon AWS Lambda
Amazon’s AWS Lambda is another node in this value chain: it’s an event-triggered technology where AWS clients can draw on cloud services and resources but are charged only for what they use, because usage is metered in 100-millisecond increments, resulting in much more accurate billing.
It also scales automatically, from a potential starting point, according to Amazon, of “a few requests per day to thousands per second”. The new technology was launched at AWS re:Invent last month. It supports code written in Node.js.
Lambda is a zero-administration compute platform. You don’t have to configure, launch or monitor EC2 instances. You don’t have to install any operating systems or language environments. You don’t need to think about scale or fault tolerance and you don’t need to request or reserve capacity.
Jeff Barr, Chief Evangelist at Amazon Web Services
Push Pull: How Lambda works
Lambda is designed to be easy to use. The user uploads their customised code, written in Node.js, to AWS Lambda; once uploaded, it is called a function. This Lambda function is a zip file containing the user’s code, the libraries it depends on and configuration data.
The events that trigger Lambda could be a jpeg uploaded to the cloud or even just a web click. The service starts within milliseconds of the event, which to the user will feel like instantaneous service delivery if everything else in the network is in order.
In AWS everything starts with a bucket on the Simple Storage Service, or S3; every object on S3 has to be contained in a bucket. The bucket is configured to generate a notification when something is added to it, and S3 publishes that notification as an event to AWS Lambda. This is the event trigger. At this point Lambda executes the client’s function code.
AWS official documentation calls this the “push model” where S3 triggers Lambda to execute the code function. A user defined app can also produce the event that triggers Lambda’s code execution.
The “push model” implies the existence of a “pull model”, and AWS doesn’t disappoint. Here Lambda pulls events either from an Amazon DynamoDB stream, which contains the change logs of a DynamoDB table, or from an Amazon Kinesis stream, which publishes events from a custom application. Lambda executes code in the order in which events are pulled from the stream.
The user needs to give Lambda permission to pull from the stream and execute the function. Lambda executes the code and uses AWS services only if it has been given rights to access the resources, and those permissions can be granted only by the AWS account holder. They are given to Lambda via an IAM execution role.
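An execution role is just an IAM policy attached to the role Lambda assumes. A sketch of the policy statement granting read access to a DynamoDB stream; the account number and table name here are placeholders, not real values:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetRecords",
        "dynamodb:GetShardIterator",
        "dynamodb:DescribeStream",
        "dynamodb:ListStreams"
      ],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders/stream/*"
    }
  ]
}
```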
But technology commentator Hunter Kelly says that the distinction between push and pull in this model is not clear cut. Speaking to Verify he said: “If your Lambdas are getting data from, say, Dynamo DB, [this is] only when an event triggers your Lambda to run, this is still, to me, essentially a push model; your code gets notified of a relevant event (even if that notification is simply the fact that your code is running) and then it pulls the data from the appropriate source”.
Another feature of Lambda is the degree of automation it offers AWS users. In contrast to Amazon EC2, Lambda looks after the provisioning of instances; deploying security updates; managing front-end web services; monitoring the compute fleet for any signs of dysfunction and monitoring code functions.
The event prompts Lambda to run the code in the function and find free resources in the cloud. Because the function is stateless, there are no configuration or deployment delays; the code runs in milliseconds, so users shouldn’t experience any lag.
Scaling, of course, is crucial in any cloud business, and Lambda does it automatically for users, launching as many copies of the code as needed. If the user chooses the appropriate amount of memory, the service will allocate a proportionate amount of complementary resources.
This is faster, cheaper cloud, and it’s an example of how AWS stays out in front of the cloud industry. As of last month, according to AWS boss Andy Jassy, the cloud service has just passed the 1 million customer mark, including private and public sector clients. It has been successful in pulling in business, and volumes are going to be important, because in the same Wall Street Journal interview Jassy said: “This space is going to be a high-volume, relatively low-margin business.”
So Amazon has to distinguish its offering by cost effectiveness and ease of use to keep those customers coming. One thing it will have taken away from the development of the tech market over the last 25 years is that it can’t be complacent. It needs, as its recent conference name suggested, continuous re-Invention.