Just thought I'd summarize what I've learned so far ... a particular slide stood out for me....
The box in white around everything is the service; everything outside it is the public internet.
LB is a load balancer, which takes input off the public internet and feeds it to the web role.
‘N’ = as many copies of the web role as you want, all distributed behind the load balancer.
This is the only way that input is coming into the system.
No one in the world can hit an IP address of one of your boxes directly.
They only come in through the load balancer and hit one of the published endpoints.
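The fan-out behind the load balancer can be sketched as a simple round-robin dispatcher. This is a toy model only: the real Azure load balancer is managed infrastructure, and the instance names below are made up for illustration.

```python
from itertools import cycle

class RoundRobinLB:
    """Toy round-robin load balancer: N web role instances behind one endpoint."""

    def __init__(self, instances):
        self._instances = cycle(instances)

    def route(self, request):
        # Every request enters through the LB; clients never see instance IPs.
        instance = next(self._instances)
        return f"{instance} handled {request}"

# N = 3 hypothetical web role instances
lb = RoundRobinLB(["webrole_0", "webrole_1", "webrole_2"])
print(lb.route("GET /"))  # webrole_0 handled GET /
print(lb.route("GET /"))  # webrole_1 handled GET /
```

The point the sketch makes is that the caller only ever talks to the load balancer's published endpoint; which instance answers is invisible to them.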
The web role(s) talk to cloud storage, which is also a service of Windows Azure.
The dotted edge between the webrole and the cloud storage means this is not going out over the public internet.
This dotted arrow is within the same system that’s hosting your data and running your code.
So there is some level of co-location, with the benefits of lower latency, higher throughput, and lower communication costs.
The worker role: you can have as many instances of it as you want.
‘M’ = as many copies of the worker role as you want.
The worker role is shown talking to cloud storage, which you can think of as an infinitely big bus of data.
The dotted edge between the worker role and the cloud storage means this is not going out over the public internet.
The dotted arrow is within the same system that’s hosting your data and running your code.
So, again, there is some level of co-location, with the benefits of lower latency, higher throughput, and lower communication costs.
You can think of the worker role as anything with a main() method that runs in a loop.
It just pulls stuff off the queue.
Notice that there is nothing coming into the worker role from the edges (public internet) or anywhere else... no one is hitting it.
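That loop can be sketched with Python's standard-library queue standing in for Azure Queue storage. This is a simplified model under stated assumptions: a real worker role would poll the cloud queue over REST and back off when it comes up empty, rather than exiting.

```python
import queue

def process(message):
    # Placeholder for real work (e.g., resizing an image, sending mail).
    return message.upper()

def worker_main(work_queue, results):
    # The worker role: effectively a main() that loops forever,
    # pulling messages off the queue. Nothing connects *in* to it.
    while True:
        try:
            message = work_queue.get(timeout=0.1)
        except queue.Empty:
            break  # a real worker would sleep and poll again instead
        results.append(process(message))
        work_queue.task_done()

jobs = queue.Queue()
for msg in ["hello", "world"]:
    jobs.put(msg)

out = []
worker_main(jobs, out)
print(out)  # ['HELLO', 'WORLD']
```

Because the worker only ever pulls from the queue, the web roles and worker roles never talk to each other directly, which is what keeps the two tiers loosely coupled.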
The cloud storage box has a bi-directional arrow out to the internet, which means the cloud storage system is accessible from anywhere... it has REST APIs.
This means you can store and retrieve data from anywhere, giving you flexible architectural choices for creating solutions and extra flexibility when debugging and viewing the data.
You don’t have to host code in the cloud to access the storage system; you can be running on-premises or anywhere that can reach an internet URL.
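Because storage speaks plain HTTP, anything that can form a URL can address it. A minimal sketch of building a blob's REST address follows; the account, container, and blob names are hypothetical, and a real request against private data would also need a signed Authorization header.

```python
def blob_url(account, container, blob):
    # Windows Azure blob storage addresses follow this URL pattern.
    # Account/container/blob names here are made up for illustration.
    return f"https://{account}.blob.core.windows.net/{container}/{blob}"

url = blob_url("myaccount", "photos", "cat.png")
print(url)  # https://myaccount.blob.core.windows.net/photos/cat.png
# A client anywhere -- cloud-hosted or on-premises -- could then fetch
# this URL over plain HTTP, no Azure-hosted code required.
```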
The web role and worker role both have a unidirectional arrow out to the internet.
This means you can open up a TCP connection to anywhere from here.
To any box in the world.... you can talk to anything you want however you want.
You can talk to on-premise or cloud based services.
This picture represents best practices architecture for building cloud services that scale.
You have a bunch of stateless compute nodes... any of the machines can fail at any time and your service is still going to be up.
Stateless Compute + durable storage
Scaling out instead of up
Loosely Coupled Architectures with queues for asynchronous processing.
Increased accessibility and architectural flexibility.
All in a pay-as-you-go / pay-for-usage model without upfront capital expenditures.
Leveraging existing .NET skills and toolsets.