I would like to share my latest creation. PiNAS is an attempt at building a small home server based on the Raspberry Pi 3B+.

Main components:

  • Raspberry Pi 3 Model B+
  • Raspberry Pi Universal Power Supply 5.1V 2.5 A
  • Ultra-thin Aluminum Alloy CNC Case Portable Box Support GPIO Ribbon Cable For Raspberry Pi 3 Model B+(Plus)
  • WD Elements 2.5” USB 3.0 4 TB
  • SanDisk Ultra microSDHC UHS-I 16 GB

Raspbian Stretch Lite is my OS of choice.

The easiest and cleanest way I’ve found to transfer the root partition from the SD card to the HDD: How to Boot Raspberry Pi from USB Hard Drive

Note this is my first 3B+ model. Previously I’ve used a 3B and had no problems booting from the USB HDD. The 3B+ is a different animal. I’ve spent hours researching why it did not boot directly from the HDD. Finally I came across this article: Raspberry Pi boot modes

So I have to keep the SD card, with a fresh bootcode.bin on it, inserted at boot time. Afterwards the card is not used, as my root partition is located on the USB HDD.
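The move itself boils down to copying the root filesystem to the HDD and then editing /boot/cmdline.txt so the kernel mounts the USB partition as root. A minimal sketch of that last step (the device name /dev/sda2 is an assumption — check yours with lsblk or blkid first):

```shell
# Repoint the kernel root to the USB drive's partition.
# cmdline_file is usually /boot/cmdline.txt; new_root is something like
# /dev/sda2 or PARTUUID=xxxxxxxx-02 -- both are assumptions, verify first.
point_root_to_usb() {
  cmdline_file="$1"
  new_root="$2"
  # replace the existing root= value in place
  sed -i "s|root=[^ ]*|root=${new_root}|" "$cmdline_file"
}
```

Remember to also update /etc/fstab on the new root partition so the / entry points at the same device.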

I tried this first: How to boot from a USB mass storage device on a Raspberry Pi, but it did not work! Then this: Raspberry Pi 3 USB booting.
No luck!

This article explains why in a very detailed manner: Pi 3 booting part I: USB mass storage boot beta

After that everything works as expected. Now, on boot, the Pi reads the boot partition from the SD card and continues from the HDD.
In contrast, my 3B just reads everything from the HDD and no SD card is needed.

Now it is time to install Docker and Docker Compose: How to run docker and docker-compose on Raspbian
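As I recall, the linked article's route boils down to the Docker convenience script plus docker-compose via pip. A hedged summary, written as a dry-run helper so the steps can be reviewed before running them (the 'pi' user name is an assumption):

```shell
# Print the install steps instead of executing them, so they can be
# reviewed first. Adapted from memory of the linked article.
print_docker_install_steps() {
  cat <<'EOF'
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker pi
sudo apt-get install -y python3-pip
sudo pip3 install docker-compose
EOF
}
```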

Adding Samba and Deluge was quick and easy.
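The Samba side amounts to a share definition in /etc/samba/smb.conf; a minimal sketch (the path and user here are illustrative assumptions, not my exact values):

```ini
; /etc/samba/smb.conf fragment -- illustrative values only
[storage]
   path = /mnt/hdd/storage
   browseable = yes
   read only = no
   valid users = pi
```

Followed by sudo systemctl restart smbd to pick up the change.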

As the inspiration for this build and post, I would like to thank the creators of the following posts:
Raspberry Pi Home Server – really liked the UPS and Dynamic Fan Control.
Create a hardened Raspberry Pi NAS – some of the configurations came quite in handy.

PiNAS 2.0
As you know, the Raspberry Pi 4B was announced last week. It provides up to 4 GB RAM, true Gigabit Ethernet and two USB 3.0 ports. I hope I’ll be able to attach two USB 3.0 HDDs and maybe build a software RAID. For the moment I’ll have to wait, as even USB HDD boot is not yet implemented in Raspbian Buster. With a bit of luck, in a couple of months all will be fixed and working.

Until then my setup provides almost 20 MB/s throughput over Samba.
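That figure comes from simple sequential writes; a quick way to sanity-check it against a mounted share is a dd run (the mount point is an example, and the sizes are kept small here):

```shell
# Write a test file with dd and report its size in bytes; dd prints the
# throughput itself when stderr is not suppressed.
write_test() {
  dir="$1"
  dd if=/dev/zero of="$dir/ddtest.bin" bs=1M count=32 conv=fdatasync 2>/dev/null
  stat -c %s "$dir/ddtest.bin"
}
```

For example, write_test /mnt/pinas (and delete ddtest.bin afterwards).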

Microsoft Certification Paths for Azure and Microsoft 365 in 2019

Vlad Catrinescu posted a few very useful articles about the new certification exams. I would like to share them here.

Microsoft Certification Paths for Azure and Microsoft 365 in 2019

Teamwork Administrator Associate

He also made the effort to put together a study guide for the Teamwork Administrator Associate exam:

Teamwork Administrator Associate

Update 02.06.2019:

Check this great presentation listing all Microsoft Training & Certification info.

Ode to ThinkPad

My first ThinkPad was a T61. The keyboard was sublime, the performance outstanding! Throughout the years I’ve worked on many T, W and X series machines. I’ve used most, if not all, of the top laptop brands, but ThinkPad is still my favorite.

In this article I would like to share my experience with my current machine.

Lenovo ThinkPad T420

I bought it about two years ago, second hand, for the decent price of 200 Euros. It was in “like new” shape, with the best screen for the model (14.1″ HD+ 1600×900, 220 nits) and a discrete GPU (NVIDIA NVS 4200M, 1 GB VRAM, Optimus). Straight away I added a Samsung 860 Pro SSD in a caddy in place of the optical drive for 100 Euros. Unfortunately, the laptop came with as little as 4 GB RAM. I fixed that with a pair of Kingston 8 GB RAM sticks for 150 Euros. To enhance performance further, I was able to plug in an mSATA Samsung 860 EVO SSD for 75 Euros, combined with the old HDD that is kept for storage only.

Now the performance was almost acceptable. The last thing was the CPU. This machine is not only one of the last generations with the much-prized keyboard and pad configuration, but its CPU sits comfortably in a socket. This means you can easily change it, just like in a desktop computer. My idea was to put in as powerful a CPU as possible, since the dual-core Intel i5 didn’t have enough juice.

There are places on the net dedicated to people who love their ThinkPads.

A good starting point is ThinkWiki, a page with all kinds of useful information for each model:

Here you can see that my model supports some quad-core i7s. Comparing the specs, my choice fell on the

Intel Core i7-2760QM (2.40GHz, 6MB L3, 1600MHz FSB)

It has all the capabilities of my old

Intel Core i5-2520M (2.50GHz, 3MB L3, 1333MHz FSB)

Plus twice the cache and twice the cores. I fished it out quite quickly on the online markets.

As you can see in the wiki, this CPU is officially unsupported and consumes far more watts than the system is designed to provide. But it works!

Reading through the following articles built my confidence that it would be stable and properly cooled:

One last thing: you must flash a modded version of your BIOS. The best that I could find is this:

It removes some limitations and gives you much more control over the inner workings of the hardware.

I’m very grateful to all members of these forums that shared valuable information and experience!

What a surprise it was when the machine quickly started to overheat, hitting 100 °C in seconds. Intel CPUs have a nice feature called Turbo Boost. From my understanding, this is an overclocking feature. For example, my CPU runs at 2.4 GHz. Under heavy load the multiplier is shifted up and some or all cores reach 3.2 GHz. The power consumption hits the roof, and so does the heat. Then inevitably comes the dreaded thermal and power throttling.

After fiddling with some settings in ThrottleStop, my decision was simple: equipped with my new BIOS, I just turned off Turbo Boost. This limits the frequency to the stock one. To lower the power consumption further, I also switched from the GPU built into the CPU to my discrete one only.

Et Voila!

Thermal and Power Throttling disappeared!

My build is completed!

RAM – Maxed!

HDD – Maxed!

CPU – Maxed!

GPU – Maxed!

Display – Maxed!

Other noteworthy articles:

Lenovo ThinkPad T420 Upgrade Project


As a nice accessory, I have a ThinkPad Mini Dock Plus Series 3 that cradles the laptop nicely.

I’m currently working on upgrading from Windows 7 to Windows 10. As Lenovo does not support this OS, the biggest challenge I’ve found is with drivers, specifically the compatibility issues of NVIDIA Optimus.

Upgrade to Windows 10

As a warm-up, I ran a few test installs and upgrades just to see how it goes. Windows 10 upgrades from Windows 7 pretty well. Clean installs hit issues with NVIDIA Optimus: setup hangs, and the only way to continue is to use the GPU built into the CPU. If I switch to the discrete one, Windows hangs. My decision here was to go the upgrade path instead of a clean install. It transfers nearly all my applications and settings, reuses most of the drivers, and generally works well for me. As Windows 10 is not officially supported on the T420 and I don’t want to hunt for driver updates manually, I used Driver Genius. I also looked at Driver Booster, but as I had previous experience with the former, it won. Honestly, I don’t want an app that will offer me new drivers just because it is expected to prove its value.

After the migration my default administrative shares were gone. I had to modify the registry as described here to make them work again:

How To Enable Remote Access To Administrative Shares in Windows 10
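If I recall the fix correctly, it boils down to a single registry value; as a .reg sketch (apply at your own risk — and check the linked article for the full context):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001
```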

A pretty good explanation of the Optimus settings can be found here:

How do I customize Optimus profiles and settings?

Please note that you can choose a graphics processor per application. I had some crazy problems with Office 365 apps that showed just a black screen over RDP. It took me some time to figure out that 32-bit applications do not play well with Optimus. I had to reinstall Office as 64-bit. This fixed the issue, and now all applications work fine over RDP.

After the migration, Visual Studio 2019 gave me some errors, but after a repair all was fixed.

I needed Docker Desktop mostly for Windows containers. It was a huge and unpleasant surprise that VMware Workstation stopped working after the Docker Desktop installation.

The error is very well described here:

VMware Workstation and Device/Credential Guard are not compatible

As Docker Desktop uses Hyper-V, I had to sacrifice VMware Workstation. It was a hard decision, but for the moment containers outweigh VMs. I hope that in the future I will be able to use VMware Workstation and Docker Desktop (Windows containers) in parallel.

In summary, my laptop is up to date, with maxed hardware specs and the latest OS and applications. I’m pretty happy with the result and the performance. It was totally worth the time, money and effort to upgrade this machine. Hope you liked it.

pfSense 2.5.0 Development Snapshots Now Available

News from the Netgate blog this week reveals more details about the upcoming pfSense 2.5.0 build.
You can check it in detail: pfSense 2.5.0 Development Snapshots Now Available

The most interesting part for me is:

AES-NI Not Required
The original plan was to include a RESTCONF API in pfSense 2.5.0, which for security reasons would have required hardware AES-NI or equivalent support. Plans have since changed, and pfSense 2.5.0 does not contain the planned RESTCONF API, thus pfSense 2.5.0 will not require AES-NI.

This will allow a broader range of hardware to run this new version.

There is a comprehensive list of new features and changes:

2.5.0 New Features and Changes

pfSense One Last Time

I built my first pfSense router almost a decade ago. A fun fact is that the device is still working flawlessly; the Alix2d13 is a beast. At that time 100 Mbps was more than enough compared to the ISPs’ offerings. Fast forward ten years, and my WAN links are above this speed. The CPU and RAM, combined with the embedded architecture, impose heavy limitations on the plugin options.

I still remember the time when, equipped with a console cable hooked to the Alix running pfSense v1.2.2 and holding tight to my hard copy of pfSense: The Definitive Guide to the Open Source Firewall and Router Distribution, I typed away on the terminal. Those days are long gone.

Currently pfSense v2.4.4 is running on a dual-core Intel CPU with 8 GB RAM and an SSD. The book is freely available online. It makes very little sense to me to write tutorials on how to do this and that any more. The only exceptions are some obscure cases that are rare, where sharing will be mutually beneficial.

In retrospect, pfSense has come a long way over the past 9 years. The project changed hands and lost some of its founding fathers. These changes steered the project in a direction that is not much to my liking. To get a taste, please review the following quotes:

pfSense 2.5 and AES-NI

pfSense version 2.5 will be based on FreeBSD 12, which should bring route-based IPsec, along with support for our integrated management platform, NRDM (more about this soon), and a number of other features.

With the increasing ubiquity of computing devices permeating all areas of our lives at work and at home, the need for encryption has become more important than ever. Desktops, laptops, smart phones, tablets, and many other devices all share this need to be able to encrypt sensitive information. Without encryption, everything you send over a network (or even store on a local storage device) is in the open, for anyone to read anytime he wants to read or even change it.

While we’re not revealing the extent of our plans, we do want to give early notice that, in order to support the increased cryptographic loads that we see as part of pfSense version 2.5, pfSense Community Edition version 2.5 will include a requirement that the CPU supports AES-NI. On ARM-based systems, the additional load from AES operations will be offloaded to on-die cryptographic accelerators, such as the one found on our SG-1000. ARM v8 CPUs include instructions like AES-NI that can be used to increase performance of the AES algorithm on these platforms.

The AES-NI instruction set extensions are used to optimize encryption and decryption algorithms on select Intel and AMD processors. Intel announced AES-NI in 2008 and released supported CPUs late 2010 with the Westmere architecture. AMD announced and shipped AES-NI support in 2010, starting with Bulldozer.

Please remember these requirements when you are considering components for your pfSense system.

More on AES-NI

There have been some concerns expressed about the requirement for AES-NI (or other offload) with pfSense 2.5, as announced two days ago.

Some complained that, since they don’t use VPN, they don’t need AES-NI. While I wasn’t quite ready to say more about the “3.0” effort, it is the reason for the new requirement for pfSense 2.5 and beyond.

With AES you either design, test, and verify a bitslice software implementation, (giving up a lot of performance in the process), leverage hardware offloads, or leave the resulting system open to several known attacks. We have selected the “leverage hardware offloads” path. The other two options are either unthinkable, or involve a lot of effort for diminishing returns.

So why the requirement?

Future versions of pfSense have a new management model. We’re leveraging YANG, via RESTCONF.

The webGUI will be present either on our cloud service or on-device, both talking to the ‘back-end’ (written in ‘C’) on the device via a RESTCONF interface. This is just as I said back in February 2015.

We’re leveraging AES-GCM inside TLS as the transport layer, because RFC 7525 REQUIRES it, and the RESTCONF standard, RFC 8040, says RFC 7525 is a SHOULD.

AES-GCM in particular has problems with side-channel attacks on pure software implementations. ChaCha20, which nicely avoids these issues when in software, isn’t an option. This is because: a) it’s not RFC-compliant, and b) there are currently no acceleration offloads for it, and the situation is that there could be thousands, or tens of thousands of pfSense instances hitting a single (clustered) instance of our cloud management platform.

So the choice is either to design, engineer and release a less-than-strong product, or require AES-NI or other offloads.

The entire PHP layer is being eliminated in the “3.0” effort, and we’re simply too small to continue to maintain both the current, organically-grown PHP layer (100K lines of PHP in 200 files) and the new, pure JS GUI (client) architected as a single page web application.

So there is an excellent chance that pfSense 2.5 will use the new webGUI, talking to our RESTCONF back-end.

As should be obvious by now, this isn’t about VPN.

It’s Still Free to Use

If you’ve been testing 2.4 snapshots and updates, you’ve already seen a lot of new features. You have probably also seen a new pop-up in the webGUI.

Trademark Policy pop-up
While the pop-up is new, the message isn’t. On January 27, 2017 we posted a blog Announcing a new trademark policy for pfSense. This pop-up is a simple reminder that ensures everyone sees and acknowledges the trademark policy.

For our end users and customers, nothing has changed. pfSense Community Edition (CE) remains a free and open product available for your personal or business use. This is true if you buy hardware from us or not. This notice addresses those who take pfSense CE and sell it, infringing our policy while not giving back to the project and directly competing with Netgate.

At Netgate, we engineer, build, test, and give pfSense software to the community for free. Accepting the pop-up affirms your agreement and right to use, but not sell, pfSense software.

After reading these articles and looking at my old faithful Alix, it is time to look for alternatives. One possibility is OPNsense.

These two articles condense a long story; I hope you find them informative.
I’ll share just a few short quotes from them.


OPNSense is a fork of PFSense, and PFSense is itself a fork of m0n0wall.

The story gets even more interesting:
Building a BSD home router (pt. 6): pfSense vs. OPNsense

Once upon a time… in 2003 there was a new firewall OS called m0n0wall. Manuel Kasper had built it on a stripped down version of FreeBSD. There had been small firewalls before, but Kasper’s innovation was to put a Web GUI on top of it so that the firewall’s settings could be controlled from the browser! It did not take long and m0n0wall took the world by storm. However Kasper’s project focused on embedded hardware. So only a while later a fork was created which geared towards more powerful hardware. The fork’s name? You’ve guessed it: pfSense. In 2015 Manuel Kasper officially ended the m0n0wall project (because recent versions of FreeBSD had been grown too big to be easily usable for what he did with it in the past). And guess what he did: He gave his official blessing and recommends to migrate to and support OPNsense!

So if I want to go with pfSense, I need a box with a CPU supporting the AES-NI instructions. For this I can either get a branded box or build one on my own. Naturally, I’ve chosen the latter.
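Checking whether a CPU at hand supports AES-NI is a one-liner on Linux; sketched here as a small helper so it can be pointed at any cpuinfo dump:

```shell
# Return success if the given /proc/cpuinfo-style file lists the 'aes' flag.
has_aes_ni() {
  grep -qw aes "$1"
}
# typical use: has_aes_ni /proc/cpuinfo && echo "AES-NI present"
```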

In selecting the box, I have to thank the great members of the pfSense community forum.
Alternative to Qotom Q190G4 with AES-NI?

My choice of box is this quiet beast:

QOTOM 4 LAN Mini PC with Core i3-4005U / i5-5250U processor and 4 Gigabit NIC, support AES-NI, Serial, Fanless Mini PC pfSense

Product properties: i5-5250U, 8G RAM, 128G SSD + Q355G4 WIFI $ 350.42

The fanless design combined with the SSD makes the operation completely silent. It keeps a steady 50 °C temperature, with an average CPU usage of 1-4 %, and consumes only 7-10 % of the memory. My hope is that it will serve me for at least the next 10 years. As it is an x86 architecture, my choice of OS widens dramatically, leaving the door open to experiments with whatever comes to my mind.

Multi-WAN with pfSense HTTPS Sites Issue

I’ve been running pfSense in dual-WAN mode for more than a decade. Unfortunately, some sites have lately become quite sensitive to a user session originating from multiple public IP addresses. The best description of the problem is from the official pfSense documentation:

Some websites store session information including the client IP address, and if a subsequent connection to that site is routed out a different WAN interface using a different public IP address, the website will not function properly. This is becoming more common with banks and other security-minded sites. The suggested means of working around this is to create a failover group and direct traffic destined to these sites to the failover group rather than a load balancing group. Alternately, perform failover for all HTTPS traffic.

The sticky connections feature of pf is intended to resolve this problem, but it has historically been problematic. It is safe to use, and should alleviate this, but there is also a downside to using the sticky option. When using sticky connections, an association is held between the client IP address and a given gateway, it is not based off of the destination. When the sticky connections option is enabled, any given client would not load balance its connections between multiple WANs, but it would be associated with whichever gateway it happened to use for its first connection. Once all of the client states have expired, the client may exit a different WAN for its next connection, resulting in a new gateway pairing.

Problems with Load Balancing

After some testing and consideration, I decided to leave sticky connections unchecked. As mentioned above, they are problematic.

Another description of the problem is here:

Some websites do not work properly if requests from the LAN are initiated from multiple public IP addresses. Hence load balancing is incompatible with these sites. Common examples are sites that maintain login sessions, most frequently online banking. This is most commonly observed with HTTPS sites so usually HTTPS should not be load balanced. Occasionally it is a problem with HTTP sites that maintain session, but this is rare.

For sites that do not function with load balancing, add firewall rules to not load balance traffic to these destinations or protocols.

Web site incompatibility with changing IP addresses

To alleviate this issue, you can do the following:

Here are my two gateways.

Make two gateway groups:

  • One for load balancing – set both gateways to Tier 1.
  • One for failover – set Tier 1 for the first gateway and Tier 2 for the second.

Then go to the LAN rules:

  • Set the default LAN rule to use the load balancing gateway group.
  • Add a new rule that matches only HTTPS connections and set its gateway to the failover gateway group.

This way all HTTPS connections will pass through the first WAN until it goes down, then fail over to the second. The alternative is to make a separate rule for each HTTPS site with issues. Such a rule will be very similar to the HTTPS one; the difference is that the destination address will be a single public IP. Doing so will still load balance all other HTTPS connections that don’t have this problem.
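Conceptually, the two LAN rules can be sketched like this (an illustration only, not literal pfSense config.xml; the group names are the gateway groups created above, and the HTTPS rule must sit above the default rule so it matches first):

```xml
<!-- Conceptual sketch, not the exact pfSense config.xml schema -->
<rule>
  <interface>lan</interface>
  <protocol>tcp</protocol>
  <destination><any/><port>443</port></destination>
  <gateway>FailoverGroup</gateway>    <!-- HTTPS pinned to one WAN -->
</rule>
<rule>
  <interface>lan</interface>
  <destination><any/></destination>
  <gateway>LoadBalanceGroup</gateway> <!-- everything else balanced -->
</rule>
```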

My blog’s new home

Hello everyone. I’ve been running this blog for 9 years now. In 2010 there were very few options to host a WordPress site. The blog was hosted with one local provider up until its acquisition by another company two years ago. This pushed me to move. At that time I was experimenting with Azure, and the natural choice of service was an Azure Web App with the WordPress template.

The platform is packed with settings that you can tweak. Unfortunately, the prices are targeted at enterprise customers and were too steep for a small site like mine.
Recently a friend of mine mentioned static site generators. I investigated the topic, and it appeared to be quite an interesting concept. This solution fits my purposes well, as most of my content is static. Thanks to GDPR I had to close all comments and feedback options, which made the site even more static. On the positive side, not having to worry about PHP and MySQL out in the wild sounds very pleasing.

Static Site Generators Research
After reviewing a couple of solutions, the idea to use WordPress as a static site generator began to take shape. This can be achieved by employing plugins. The ones that I’m currently experimenting with are:
Simply Static
WP2Static
Each has some very strong features, but I still haven’t decided which one to keep.
Simply Static has great diagnostics that helped me troubleshoot some issues, offering a streamlined interface with the essential options. WP2Static gives more control over the crawling and processing, with valuable options to clean up some garbage from the output pages.
They both produce archives containing the static content; WP2Static can even be configured to push it directly to a GitHub repository.

Web Hosting
Either way, I now have a static version of my site. The next hurdle is where to store the files.
I don’t want to host it on-premises, as that would require a lot of additional time and effort. As the cloud options are so many, let’s narrow it down to Static Websites on Azure Storage and GitHub, for simplicity’s sake.
Ever since I read about Static Websites in Azure, I really wanted to try it with something. Now that I have the content, experiments can commence. My inspiration came from this article: Static websites on Azure Storage now generally available
The price for storing content and the data transfer charges would be low in my case. SSL is a requirement, but it is not directly offered by Static Websites; I need Azure CDN in front to get the desired feature. This incurs additional cost and complexity for just one simple static site served over HTTPS.
For Web Apps there is an Azure Let’s Encrypt extension, which is a pretty useful solution for automatically renewing the SSL certificate.

This tilted the table in favour of GitHub. There are no charges for storing publicly accessible content, and it automatically takes care of the SSL certificate provisioning and renewal. For the moment, GitHub wins.
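Publishing then becomes a commit-and-push of the generated output; a minimal sketch (the remote URL, identity values and branch handling are assumptions, and the GitHub Pages setup itself is left out):

```shell
# Initialize a repo in the static-output folder, commit everything,
# and push to the given remote. Identity values are placeholders.
publish_static() {
  dir="$1"
  remote="$2"
  (
    cd "$dir" || return 1
    git init -q
    git add -A
    git -c user.name=blog -c user.email=blog@example.invalid commit -q -m "Publish static site"
    git remote add origin "$remote"
    git push -q origin "$(git symbolic-ref --short HEAD)"
  )
}
```

For example, publish_static ./static-output git@github.com:user/user.github.io.git (a placeholder URL).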

WordPress Hosting
Now that I have the generation and web hosting covered, the last obstacle is where to keep my static-content-generating WordPress instance.

Cloud – a good starting point, but pretty much the same as before: you get a free WordPress instance somewhere out there. Not my ideal solution.

Azure Container Instances – recently I experimented with this service. It is perfectly suited to quickly spinning up a bunch of containers, getting the job done and destroying them. There is a ready-made template to build a two-container solution for WordPress here: Create a WordPress site on a Container Instance
My experience with the service is mostly positive. Bringing it up is easy; getting data persistence requires modifying the template. The performance at the lowest tier was comparable to that of the Web App and insufficient from my perspective.

VM with Docker containers – my first choice. I spun up an Ubuntu server and quickly and easily installed Docker and Docker Compose. As I felt generous at the time, the VM had 4 CPU cores and 4 GB of RAM. Even on a mechanical HDD, the performance was fantastic! I never knew that WordPress could feel so snappy! The downside is that I will not keep the VM running all the time and must start and stop it every time I want to do something.

RPi with Docker containers – as I have a Raspberry Pi humming in the background, this is a preferable alternative to a full-blown VM. I managed to install Docker and Docker Compose relatively easily, using these articles as a starting point: The easy way to set up Docker on a Raspberry Pi and How to run docker and docker-compose on Raspbian

To my unpleasant surprise, there is NO official MySQL image for ARM. Really!? Check this article: Install mysql server-5.7 on ARMv8 architecture (raspberry pi 3 model B)
But there is one for WordPress. I tried building one on my own and hit a lot of problems! As a temporary solution, for the moment I’m using the hypriot/rpi-mysql image. It works fine but ships MySQL 5.5, while my aim is 5.7. In the near future I’ll look into it and, if possible, build an image of my own.
I had a few hiccups with the persistent storage of data for both the WordPress and MySQL containers, but got it right in the end.
Here is the content of my docker-compose file:

version: '3.7'

services:
  db:
    image: hypriot/rpi-mysql:latest
    volumes:
      - ./db_data:/var/lib/mysql
    restart: always

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    volumes:
      - ./wp-content:/var/www/html:rw
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306

volumes:
  db_data: {}
  wp-content: {}

Again, the performance is surprisingly good compared to the cloud scenarios, considering that it runs off a mechanical hard drive on a very humble 1.2 GHz quad-core with only 1 GB of RAM.

With this exercise I’ve managed to decrease the costs of hosting this blog to the bare minimum. Hosting costs are reduced to virtually free, and the power costs for the Raspberry Pi are shared with other services and are negligible.
Compared with the Azure Web App and its 20 Euros per month, it is a feat!
My initial intention for this post was to be short and sweet, but it grew into something much larger. I was able to compress a few weeks of research and experiments into these few lines, omitting a lot of problems encountered along the way. I hope you enjoy reading it.

Azure Architecture Useful Resources

I’ve collected a few useful resources and would like to share them with you. If you are into Azure architecture, these will most probably be quite useful.

Azure Architecture Center – Start Here

Azure Application Architecture Guide – This guide presents a structured approach for designing applications on Azure that are scalable, resilient, and highly available. It is based on proven practices that we have learned from customer engagements.

Azure Reference Architectures – Our reference architectures are arranged by scenario. Each architecture includes recommended practices, along with considerations for scalability, availability, manageability, and security. Most also include a deployable solution or reference implementation.

Azure Quickstart Templates – Deploy Azure resources through the Azure Resource Manager with community contributed templates to get more done. Deploy, learn, fork and contribute back.

Cloud Design Patterns – These design patterns are useful for building reliable, scalable, secure applications in the cloud.
Each pattern describes the problem that the pattern addresses, considerations for applying the pattern, and an example based on Microsoft Azure. Most of the patterns include code samples or snippets that show how to implement the pattern on Azure. However, most of the patterns are relevant to any distributed system, whether hosted on Azure or on other cloud platforms.

Microsoft patterns & practices – We discover, collect, and encourage practices that bring joy to engineering software.

Introducing the Azure portal “how to” video series – A new video weekly series highlights specific aspects of the Azure portal so you can be more efficient and productive while deploying your cloud workloads from the portal.

Azure Resource Explorer – Azure Resource Explorer is a new web site where you can easily discover the Azure Resource Management APIs, get API documentation, make actual API calls directly in your own subscriptions

Microsoft Cloud Workshops: Free Microsoft Azure Hands-on Lab Guides – The Microsoft Cloud Workshop (MCW) program maintains a number of Workshops that are used to train Microsoft’s own Cloud Solution Architects, as well as Microsoft Partners all over the world, how to use Microsoft Azure services. Currently there are a total of 36 Microsoft Cloud Workshops that cover a wide range of enterprise scenarios within Azure. These include Internet of Things (IoT), Blockchain, Cost Optimization, Cloud Migration, Microservices, Serverless, and much more!

Microsoft Certified: Azure Solutions Architect Expert

I would like to share my learning path toward this certification. In order to achieve Microsoft Certified: Azure Solutions Architect Expert, you have to pass two exams: AZ-300 and AZ-301.

AZ-300 Microsoft Azure Architect Technologies
A great way to start is with Scott Duffy’s course on Udemy: AZ-300 Azure Architecture Technologies Certification Exam 

Now it is time to dig deeper into each topic. For this purpose you can go through either this list: Updated: Study resources for the AZ-300 Microsoft Azure Architect Technologies exam, or this one: Azure Architect AZ-300 Exam. The latter is much more detailed.

AZ-301 Microsoft Azure Architect Design
Soon there will be a video course from Scott Duffy for this exam too. Until then, you can start with this list: Updated: Study resources for the AZ-301 Microsoft Azure Architect Design exam.

If you have a Pluralsight subscription, there is a large collection of courses here.

Before going to the exam you absolutely have to practice! Working with the real production technology has no substitute, so hit the labs. My favorites are on the new Microsoft Learn platform. Check this list of learning paths specially tailored for Azure architects. I’ve done them all and can say they are well written and the hands-on labs are pretty good.
Microsoft Learn – Solution Architect

As an alternative to Learn, you can check these two locations for other labs:
Azure Citadel
Microsoft Hands-On Labs