
Scaling Scrum to the limit


You’re likely to have been asked the question: “we need to go faster, how many more people do we need?” Most people naturally understand that just adding a random number of people isn’t likely to make us any faster in the short run. So how do you scale Scrum to the limit? And what are those limits?

Meet Peter. He’s the Product Owner of a new team starting on the greatest invention since sliced bread. It’s going to be huge. It’s going to be the best. Peter started this new product with a small team, six of his best friends, and it has really taken off. In order to meet demand while adding new features, Peter needs to either get more value out of his team or, if that is no longer possible, add more team members.

He and his team have worked a number of sprints to get better at Scrum and have implemented Continuous Integration, even delivering to production multiple times per day. It is amazing what you can do with a dedicated team willing to improve. But since their product was featured in the Google Play Store they’ve found themselves stretched to their limits. Peter now finds himself in the classic situation in which many product owners and project managers end up: how do you replicate the capabilities of your existing team without destroying a current high-performance team? He contacts a good friend, Anna, who has dealt with this situation before and asks for her advice.

Anna explains that there are two options for gradual growth that have a very high chance of succeeding, with limited risk to his productive team.

Paweł Olchowik summarized this post in a short movie.

1. Grow and split model

In this model, new team members are added to the existing team, one team member at a time, taking enough time to let the new member settle before adding the next. Once the team reaches a critical point, a natural split is bound to happen and you’ll end up with two or more smaller teams. Peter remains the sole Product Owner (a product is always owned by a single owner in Scrum), but as they grow they may add an additional Scrum Master to help facilitate and keep the teams improving.

[Image: Grow-and-split model]

Most often the split happens naturally when a team grows beyond a certain size. This allows the team to self-manage their new composition. However, the new teams may never perform as well as the original team.

2. The apprentice model

The second model, Anna explains, uses an age-old model to train new people on the job. In the apprentice model the existing team takes on two apprentices who are trained in the ways of working and the functional domain. After a couple of sprints these apprentices reach their journeyman status and start a team of their own.

The biggest advantage of this model is that the original team stays together. They do have to onboard and teach the apprentices, which is likely to impact the way they work together, but this model has a much higher chance of retaining the productivity of the original team.

It may take a few sprints for the new team to reach the same level of productivity as the original team had, but you’ll have a higher chance of keeping your first team stable and productive. Unfortunately, in this model there are no guarantees either. Adding new people to an existing team can have lasting effects, even after these people leave.

These effects can be both positive and negative. For example, they may bring along a new testing technique that helps everyone become more productive, or they may introduce new insights that cause division among the original team. Depending on the team, they may be able to benefit from both, but it may also tear them apart.

It’s always the case that the original team will now have to learn to cooperate with the newly formed team, which will likely have a massive impact on their productivity.

[Image: Apprentice model]

Knowing when to stop scaling

Peter asks his team which model they feel most comfortable with and the team decides to start with the grow-and-split model, and after they’ve split off into two teams, adopt the apprentice model to grow further if needed.

He also asks Anna to join his company. She takes up the mantle of Scrum Master and focuses on helping the teams improve and helps them discover solutions for many of the problems introduced by working with multiple teams.

Meanwhile, Peter keeps asking the teams to train new team members and steadily the number of teams grows.

One afternoon Anna comes knocking on Peter’s door and shows him a couple of statistics she has kept for as long as she has been working at the company. She has been tracking the value delivered per sprint as well as each team’s velocity, and the latest additions haven’t really been able to deliver more. She argues that the overhead of working together with so many teams has reached the maximum sustainable by the current architecture. She asked the teams and found out that people are tripping over each other’s work, integration regularly fails, and people are spending too much time in meetings and not enough time on “real work”. Despite the practices she has introduced, such as cross-team refinement and visualization of dependencies, it seems that they have reached the maximum size for the product.

While Peter is a bit disappointed, he has to admit that Anna warned him that he couldn’t just keep adding people and expect an ever-increasing amount of work to be delivered.

[Image: The sky may not be the first limit]

Useful metrics while scaling

While velocity (story points delivered), hours spent and number of tests passed are all viable ways of tracking progress for a development team, none of them measures value: it’s easy to end up measuring the speed at which worthless junk is being delivered to production.

This is why Anna also kept track of other metrics, such as value delivered, customer satisfaction (through app store reviews), incidents in production (through the monitoring tools they have in place) and more.[1]

Keeping statistics about the amount of value delivered while you’re scaling is important. You will probably find that while the total number of teams increases, each new team adds less and less value. This is a sort of glass ceiling that you may hit sooner or later. Breaking through it may require drastic changes to the application’s architecture or to the way the teams work together.

As Peter and the original team never expected the product to take off this fast, the architecture of the application was put together a bit haphazardly. And under the pressure to deliver, they cut a few corners left and right. He calls all of his teams into the company canteen and explains his predicament. Each team selects one or two of their most experienced team members and they form a temporary team of experts to figure out how to break up their little architectural monster. After peeling off a few functional areas and refactoring them into smaller, individually deployable parts of a cohesive functional unit, it quickly becomes apparent that this new architecture prevents them from tripping over each other’s toes.

You may have heard of this model before: small, functionally cohesive units of code that maintain their own data, called microservices. These small units are ideal to form teams around, and they give those teams a lot of freedom.

[Image: Drastic changes enable new growth]

Could we have done it differently?

Sometime later Peter finds Anna in the company coffee corner and asks her whether they could have taken another approach, one that would have shown the issues in their original architecture at an earlier stage of their product’s development. He also wonders whether they could have scaled faster by hiring experienced teams.

Anna explains that there was a third option she never told Peter about, because it carried a much higher risk and she didn’t dare risk the product. The third option was to quickly add a number of teams all at once, preferably teams of people who already had experience working together and at such scale. At the same time, the original team members would be scattered among the newly formed teams, optionally rotating to share their specific knowledge of the domain and processes and to explain the architecture and infrastructure. In this model you hit the problems in the established processes and in the architecture head-on, and everyone needs to work together to quickly find solutions to all of the problems they encounter. If they manage this, they may quickly find a way to work together. They may, however, also come to a complete standstill, or the amount of conflict may reach levels never imagined before.

3. Scatter and rotate model

[Image: Scatter and rotate model]

To ensure the new teams have equal access to the knowledge and skills of the original team, the original team members often rotate among the newly formed teams or they are not dedicated to any team for a few sprints, before everything settles down.

Had this model succeeded, they might have been able to scale much faster. However, they could also have been out of business.

Peter reflects that had they had a direct competitor in the market who was able to deliver much faster, they could have taken this risk. But it would have been an all-in gamble. He's glad they weren’t in that situation.

Conclusion

There isn’t really a hard limit in terms of how many people can contribute to an agile product or organization. But clearly there are limits to the pace at which you can grow, to the amount of control you can have over what is going on in every team, and to what the product’s or organization’s architecture and processes can sustain.

There are multiple models to grow your ability to deliver value. While adding teams may seem the easiest solution, investing in continuous integration, automated deployments, and a flexible architecture may deliver more sustainable value faster.

When you do need to scale beyond what’s possible with a single team, remember that if you’re not ready for it, you’ll exponentially scale your team’s dysfunctions.

To be very blunt, if you scale shit, you end up with heaps of it. When you’re able to deliver quickly, efficiently and professionally, you can scale your teams. Keep measuring while you scale and keep evaluating your way of working, collaboration and architecture. Using your statistics, you can make an informed decision whether to scale further. Without them, you may be degrading your ability to deliver value without ever knowing it. Keep inspecting your processes, tools, architecture and team composition regularly. Your team will probably know what to improve in order to deliver more of the right things more efficiently.

Would you like to know more? Join a Scaled Professional Scrum class to experience Scrum with Nexus hands-on.


Workspace management tips for TFVC


Workspaces are maybe the least understood feature in TFVC (Team Foundation Version Control). They're a great way to isolate different sets of files and changes from TFVC repositories.

A lot of people configure a new workspace for a specific project or set of solutions, but let's look at some of the ways workspaces can be used in detail:

  • Hotfixes: you may need to create a hotfix for something happening now, while you have pending changes in your existing workspace. Instead of shelving those changes and performing a "Get Specific Version" on the bugged version, you can create a new workspace in which to solve this particular problem. After completing the fix you can continue working in the other workspace without needing to do anything.
  • Experiments: you may want to do some major refactoring, restructure source control or some other highly impactful operation. Doing this in a new (temporary) workspace helps you prevent messing up your normal work area.
  • Reviewing other people's changes: when performing a review of another person's changes, you may want a local copy so you can run, annotate and play with their code. Instead of taking these changes into your own workspace, you can easily bring them into a temporary workspace, which you can safely delete afterwards.
  • Performing a merge while you are working on other changes: you may be working on a new feature and already have some changes merged back to another branch when a release needs to be shipped. To prepare this release without picking up changes or overwriting work in progress in your current workspace, it's often easier to perform these kinds of release activities in a temporary workspace; that way you know that the work is always done on the exact version in source control.
  • Preventing accidental changes to important branches: by putting your production branch in a separate workspace, you can't accidentally combine changes from, say, Development and Main into a single check-in. Since Visual Studio often auto-selects all pending changes in the workspace, this may cause unintended changes to your master/main branch. I've written a check-in policy to prevent these issues, but having separate workspaces is a much safer solution.
  • Working with multiple developers on the same workstation/server: in some organisations, developers use a remote desktop to a central beefy server to do changes. To ensure each developer has his own set of files, each developer gets his/her own workspace. An alternative is to make the workspace public, which allows multiple developers to use the same workspace folder. But this often leads to all kinds of unexpected issues.
  • Browsing an old version of the code: if you need to review/compare an older version to a new one, you can often get away with the folder diff view in Visual Studio, but if you need to do more thorough comparisons, you may want to have 2 copies of the same folder in your TFVC repo. Creating two workspaces will allow you to have two different versions of the same folder on your local disk.
  • Prepare a special version for merges or labels: You can merge and label the workspace version of a set of files. You can create a workspace and then use Get Specific Version to fetch specific versions of specific files, these can all come from different changeset versions. Once you're satisfied, you can perform the label or merge or branch action to store this specific workspace version configuration on the server.

As you can see, workspaces allow you to do parallel development on one machine, isolate changes, and more.
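
For example, here's what the hotfix scenario could look like from the command line. This is a minimal sketch: the collection URL, workspace name and server path are illustrative, and it assumes a tf.exe recent enough to accept the tf vc alias used elsewhere in this post.

// CREATE a temporary workspace for the hotfix
tf vc workspace /new HotfixTemp /collection:https://dev.azure.com/org /noprompt

// MAP the branch that needs the fix and fetch it
tf vc workfold /map $/Project/Main C:\src\hotfix /collection:https://dev.azure.com/org /workspace:HotfixTemp
cd C:\src\hotfix
tf vc get /recursive

// ...fix, test and check in, then DELETE the temporary workspace
tf vc workspace /delete HotfixTemp /collection:https://dev.azure.com/org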

Be creative

Workspaces are a very powerful concept, usable for a lot of operations, but you need to understand the concept thoroughly. Many developers don't understand exactly what workspaces are and how they work, and they're missing out on some of the most powerful concepts of TFVC.

Consolidating and cleaning up

When you have multiple workspaces and want to merge them into one, you can unmap the folders from your _1 folder and then map those same folders in your original workspace. You can also delete the _1 workspace from the TFS server and then update the mappings of the original workspace.
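
In tf.exe terms, the consolidation could look roughly like this (a sketch; workspace names and paths are illustrative):

// UNMAP the folder from the duplicate workspace
tf vc workfold /unmap C:\src\Project_1 /workspace:MYWORKSPACE_1

// MAP the same server folder into the original workspace and fetch it
tf vc workfold /map $/Project C:\src\Project /workspace:MYWORKSPACE
tf vc get C:\src\Project /recursive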

Remember that workspaces are stored on your local machine, but that the TFS server also has a registry of who mapped which TFVC folders to which workstations. So simply deleting files from your local disk is not sufficient. You need to save these changes to the TFS server (this happens automatically after performing a get operation after changing the mappings).

To check which workspaces are registered to your workstation on the TFS server, use:

tf vc workspaces /computer:YOURWORKSTATIONNAME

Then delete old workspaces with

// DELETE the local workspace
tf vc workspace /delete:WORKSPACENAME

// DELETE the workspace registration on the TFS server
tf vc workspaces /remove:WORKSPACENAME
This first appeared in an answer to a StackOverflow question.

Setting default repository permissions on your Azure DevOps Organization


In Azure Repos there are a lot of places where you can set security:

  • At the Branch level (develop, master)
  • At the Branches level (default for all branches)
  • At the Tag level
  • At the Tags level (default for all tags)
  • At the Repository level (PartsUnlimited-GDBC)
  • At the Git Repositories level (for all repositories in a project)
[Image: Permissions can be managed at each level in the tree. Many permissions will cascade down.]

But there is no UI to set the security at the Organization level. This is fine if you're happy with the default security settings in Azure DevOps, but if you want certain settings to apply to all projects (also newly created projects), then it's sometimes useful to set the permissions at the Organization level.

For the Global DevOps Bootcamp we have a few challenges that require changes to be committed to Git through an automated process in order to cause a disruption.

To ensure the changes are able to bypass any branch policies and protected branches, we needed to make sure the service account that makes the change is able to bypass policies.

If you've dug into the security innards of Azure DevOps in the past, you'll have found out that certain permissions are granted to persons or groups and are linked to a token. This token is usually built up out of a root object and a bunch of GUIDs. For example, this is the token for a specific Git Repository:

repoV2/daec401a-49b6-4758-adb5-3f65fd3264e3/f59f38e0-e8c4-45d5-8dee-0d20e7ada1b7
^      ^                                    ^
|      |                                    |
|      |                                    -- The Git Repository
|      -- The Team Project Guid
|
-- The root object (Repositories)

The simplest way I know of to find these details is to capture the web request made when a permission is changed:

[Image: You can use the Web Developer tools in your favorite browser to find the token you need.]

Once you understand this, it's easy to find the token for "All Repositories in a Team Project": just take off the Git Repository GUID at the end:

repoV2/daec401a-49b6-4758-adb5-3f65fd3264e3/
^      ^
|      |
|      -- The Team Project Guid
|
-- The root object (Repositories)

Using the same reasoning, you get the token for "All repositories in the Project Collection/Organization" by also taking off the Team Project GUID:

repoV2/
^                                          
|
-- The root object (Repositories)

And now that we have this token, we can use tfssecurity to set Organization level git permissions:

tfssecurity /a+ "Git Repositories" repoV2/ "PullRequestBypassPolicy" adm: ALLOW /collection:https://dev.azure.com/org
            ^   ^                  ^       ^                         ^    ^
            |   |                  |       |                         |    -- Allow or Deny the permission 
            |   |                  |       |                         -- The Group (in this case "Project Collection Administrators")
            |   |                  |       -- The Permission we want to set
            |   |                  -- The Token we found above
            |   -- The Security Namespace
            -- Add  (a+) or Remove (a-) this permission

And, as you can see below, this trick actually works :).

[Image: Before: Bypass policies are not set at the Team Project level.]
[Image: After: Bypass policies are inherited from the Organization level.]

This was very useful for the Global DevOps Bootcamp. Instead of having to customize the permissions for 3000 Team Projects, we could now simply set this permission in the 7 organizations that were set up for the event.

$orgs = @("gdbc2019-westeurope", "gdbc2019-westeurope2", "gdbc2019-india", "gdbc2019-centralus", "gdbc2019-australia", "gdbc2019-southamerica", "gdbc2019-canada")

$orgs | %{ 
    $org = $_
    & tfssecurity /a+ "Git Repositories" repoV2/ "PullRequestBypassPolicy" adm: ALLOW /collection:https://dev.azure.com/$org
    & tfssecurity /a+ "Git Repositories" repoV2/ "PolicyExempt" adm: ALLOW /collection:https://dev.azure.com/$org
} 
Set default org level permissions.

Note: You can use the REST API to manage security as well, but it requires a little more work to look up the correct identifiers for the Group or User, the Namespace Identifier and more. While generally more complete, the REST API is even harder to understand.
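
If you do want to go the REST route, listing the security namespaces is a good starting point for finding the namespace identifier. A minimal PowerShell sketch, assuming a personal access token in $env:AZURE_DEVOPS_PAT and an illustrative organization URL:

# List all security namespaces and pick out "Git Repositories" to get its namespace id
$org = "https://dev.azure.com/org"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:AZURE_DEVOPS_PAT)")) }
$namespaces = Invoke-RestMethod -Uri "$org/_apis/securitynamespaces?api-version=5.0" -Headers $headers
($namespaces.value | Where-Object { $_.name -eq "Git Repositories" }).namespaceId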


Photo credit: Stephen Edmonds.

All about remote pair programming and mobbing


Lisette Sutherland is a long-time remote worker with lots of experience and host of the Collaboration Superpowers Podcast. Originally from the US and working from The Netherlands, she is living the remote life. Lisette and I met (in person) at the nlscrum Meetup in Amsterdam, where she presented and where we talked about remote pairing and mobbing afterwards. In this episode of the Collaboration Superpowers Podcast, we talk about the benefits and challenges of pairing and mobbing remotely and highlight the importance of having the right attitude and tools to make it work well.

My tips for working remotely:

  • When you work on something as a “whole”, everyone contributes and everyone is present so we no longer depend on just one person. If there’s an issue with a certain topic, anyone on the team can step up and resolve it.
  • Have a conversation about what you’re trying to achieve rather than just critiquing the thing you’re being presented with.
  • Having a great infrastructure setup, i.e. excellent audio (headphones, quiet background, great microphone) and great video so you can pick up on people’s facial cues, is essential.
  • Pairing is a learning opportunity. You work with someone to improve HOW you do things, e.g. keyboard shortcuts, different tools, websites, etc., and not just for the end result.
  • Have a standard of how your team does things in order to avoid conflict.


Configuring standard policies for all repositories in Azure Repos

A couple of weeks ago I blogged about setting collection-level permissions on Azure Repos. That sparked questions in the comments, on Twitter and in the Azure DevOps Club Slack channel about whether the same was possible for branch policies.

By default you can only configure policies on specific branches in Azure Repos. You access the policies through the Branch's [...] menu and set the policy from there. But if you're using a strict naming pattern for your branches (e.g. when using Release Flow or GitHub Flow), you may want to set a policy for all future Release Branches, or all Feature branches.

It would be nice if you could write these policies into law, that way you don't have to set them for every future branch.

Let's start with the bad news: the policy API is specific to a Project. Because of that you can't set the policies for all Git Repositories in an account, but you can specify the policy for all repositories in a Project.
[Image: Set a policy on a branch.]

If you look at the request that's generated when saving a branch policy, you can see the UI sending a POST request to the /{Project Guid}/_apis/policy/configurations REST API when creating a new policy. That request contains the scope for each policy:

[Image: Each policy has a scope in Azure Repos.]

As you can see, the policy has a Scope. You can have multiple active policies and each can have its own scope. The UI will always create a specific scope that contains the repositoryId and the exact branch name.

"scope": [
    {
        "refName": "refs/heads/master",
        "matchKind": "Exact",
        "repositoryId": "7317f685-3e85-41d6-8e20-10d2319262a7"
    }
]
Scope: (default) Specific Git Repo and single branch.

But if you look at the docs for this API, you'll find that this is not the only option available. The widest scope you can create has no repository scope at all and applies to all repositories in that project:

"scope": [
    {
        "repositoryId": null
    }
]
Scope: All Git Repos in the project.

But there are other cool options as well. You can configure a policy for all branches with a specific prefix by setting the matchKind from exact to prefix.

"settings": {
    "scope": [
      {
        "repositoryId": null,
        "refName": "refs/heads/features/",
        "matchKind": "prefix"
      }
    ]
  }
Scope: All feature branches for all repositories in the project.

Unfortunately, this API exists at the Project level only; you can't set a policy for all future projects. But when you think about it, that makes sense: you can't predict the group names, Build Definition IDs and such for projects that don't exist yet. Still, the API is less restricted than the UI would let you believe.

To figure out how each of the policies is specified, configure one branch the way you want and then open /{Project Guid}/_apis/policy/configurations/ on your account. You'll be presented with the JSON for your current configuration:

{
    "count": 1,
    "value": [
        {
            "isEnabled": true,
            "isBlocking": true,
            "settings": {
                "useSquashMerge": false,
                "scope": [
                    {
                        "refName": "refs/heads/master",
                        "matchKind": "Exact",
                        "repositoryId": "7317f685-3e85-41d6-8e20-10d2319262a7"
                    }
                ]
            }
        }
    ]
}

You can find out all you need to know about policy types by querying them from your account as well; my account returns these:

[
    {
        "description": "GitRepositorySettingsPolicyName",
        "id": "0517f88d-4ec5-4343-9d26-9930ebd53069",
        "displayName": "GitRepositorySettingsPolicyName"
    },
    {
        "description": "This policy will reject pushes to a repository for paths which exceed the specified length.",
        "id": "001a79cf-fda1-4c4e-9e7c-bac40ee5ead8",
        "displayName": "Path Length restriction"
    },
    {
        "description": "This policy will reject pushes to a repository for names which aren't valid on all supported client OSes.",
        "id": "db2b9b4c-180d-4529-9701-01541d19f36b",
        "displayName": "Reserved names restriction"
    },
    {
        "description": "This policy ensures that pull requests use a consistent merge strategy.",
        "id": "fa4e907d-c16b-4a4c-9dfa-4916e5d171ab",
        "displayName": "Require a merge strategy"
    },
    {
        "description": "Check if the pull request has any active comments",
        "id": "c6a1889d-b943-4856-b76f-9e46bb6b0df2",
        "displayName": "Comment requirements"
    },
    {
        "description": "This policy will require a successfull status to be posted before updating protected refs.",
        "id": "cbdc66da-9728-4af8-aada-9a5a32e4a226",
        "displayName": "Status"
    },
    {
        "description": "Git repository settings",
        "id": "7ed39669-655c-494e-b4a0-a08b4da0fcce",
        "displayName": "Git repository settings"
    },
    {
        "description": "This policy will require a successful build has been performed before updating protected refs.",
        "id": "0609b952-1397-4640-95ec-e00a01b2c241",
        "displayName": "Build"
    },
    {
        "description": "This policy will reject pushes to a repository for files which exceed the specified size.",
        "id": "2e26e725-8201-4edd-8bf5-978563c34a80",
        "displayName": "File size restriction"
    },
    {
        "description": "This policy will ensure that required reviewers are added for modified files matching specified patterns.",
        "id": "fd2167ab-b0be-447a-8ec8-39368250530e",
        "displayName": "Required reviewers"
    },
    {
        "description": "This policy will ensure that a minimum number of reviewers have approved a pull request before completion.",
        "id": "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd",
        "displayName": "Minimum number of reviewers"
    },
    {
        "description": "This policy encourages developers to link commits to work items.",
        "id": "40e92b44-2fe1-4dd6-b3d8-74a9c21d0c6e",
        "displayName": "Work item linking"
    }
]
All policy types available in my account.

The configuration for each policy is a bit of a mystery. I tend to configure a policy through the UI, then retrieve the configured policy to see what the JSON looks like.
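
For illustration, here's roughly what creating a "Minimum number of reviewers" policy (its type id taken from the list above) for all release branches in all repositories of a project could look like from PowerShell. Treat this as a sketch: the minimumApproverCount and creatorVoteCounts setting keys are assumptions on my part, so compare them against the JSON of a UI-created policy before relying on this.

$org     = "https://dev.azure.com/org"   # illustrative organization
$project = "{Project Guid}"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:AZURE_DEVOPS_PAT)")) }

$body = @{
    isEnabled  = $true
    isBlocking = $true
    type       = @{ id = "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd" }  # Minimum number of reviewers (see the list above)
    settings   = @{
        minimumApproverCount = 2       # assumed setting key
        creatorVoteCounts    = $false  # assumed setting key
        scope                = @(
            @{ repositoryId = $null; refName = "refs/heads/releases/"; matchKind = "prefix" }
        )
    }
} | ConvertTo-Json -Depth 5

# POST the new policy configuration to the project's policy API
Invoke-RestMethod -Method Post -Uri "$org/$project/_apis/policy/configurations?api-version=5.0" -Headers $headers -ContentType "application/json" -Body $body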

Now that you understand the underlying concepts, guids and things, you can use the raw REST requests from PowerShell, or you could use the new Azure CLI extension for Azure DevOps:

az extension add --name "azure-devops"
az login

az repos policy create --org {your org} --project {your project name or guid} --config "path/to/config/file"


Solving TFVC error TF14067 and Azure Pipelines


If you depend on the TFVC Client Object Model or tf.exe in your Azure Pipelines, error TF14067 is most likely caused by the agent depending on a different version of the TFS Client Object Model than the one you are using.

I recently ran into this issue again when Microsoft upgraded the tf.exe that ships with the Azure Pipelines agent and my TFVC Pipeline Tasks started failing everywhere. Repopulating the workspace cache is the simple fix for this issue. And I'm not alone; see this issue on StackOverflow.

When this happens you'll see the following cryptic error message that mentions:

  1. A path you are absolutely certain exists
  2. A workspace name that was created minutes before when the build agent initialized
  3. TF14067
TF14067: The item {path} could not be found in the ws_{id};Project Collection Build Service workspace, or you do not have permission to access it.

Solution

Option 1: Migrate to Git

Seriously. TFVC has been on life-support for a long time, has not been receiving new feature love in a long time and doesn't match the often fast-paced and distributed nature of today's organisations.

Azure DevOps and even Team Foundation Server before it have had a simple Import feature to convert a stable TFVC branch into a fresh Git repository. It may then take some work to clean up the repo to adhere to the latest clean-repo standards, but technically the change is pretty straightforward.

[Image: Use Import repository to convert from TFVC to Git.]
[Image: Pick the branch and pull it in.]

Option 2: Use my TFVC Pipelines Tasks

To help a client migrate from XAML builds to, back then, Visual Studio Online, I built the TFVC Pipeline tasks: a small set of simple tasks that allow you to run a couple of common TFVC scenarios as part of your build pipeline.

- task: tf-vc-checkin@2
  displayName: 'Check changes into source control'
  inputs:
    ConfirmUnderstand: true
    BypassGatedCheckin: true
    OverridePolicy: true
    OverridePolicyReason: 'Override!'
    Recursion: Full

I've recently released version 2, which adds improved and cleaned-up YAML support and uses the latest agent features so it can last a little while longer.

It's an easy way to buy a little more time while training your teams and preparing the migration to Git.

Option 3: Install and use the correct tf.exe.

Each version of the agent ships with a copy of the TFS Client Object Model. TFVC relies on a local workspace cache which must be populated for each version of the TFS Client Object Model. The agent only populates the cache for its own use.

By using a version of tf.exe that was built with the same major version number of the TFS Client Object Model, it can piggy-back on the cache that was populated by the agent.

Option 4: Force workspace cache population

You can force the population of the workspace cache on a different major version from the command-line:

> tf vc workspaces /collection:$(System.TeamFoundationCollectionUri) /computer:$(Agent.MachineName)

Or from code:

# Assumes $tfsTeamProjectCollection is an already-connected TfsTeamProjectCollection
# and that the TFS Client Object Model assemblies have been loaded.
$versionControlServer = $tfsTeamProjectCollection.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
# Populate the local workspace cache for the authorized user
$workstation = [Microsoft.TeamFoundation.VersionControl.Client.Workstation]::Current
$workstation.EnsureUpdateWorkspaceInfoCache($versionControlServer, $versionControlServer.AuthorizedUser)
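
In a pipeline, you could run this as a step before any tasks that use tf.exe. A minimal sketch, assuming tf.exe is on the agent's PATH:

- script: tf vc workspaces /collection:$(System.TeamFoundationCollectionUri) /computer:$(Agent.MachineName)
  displayName: 'Repopulate the TFVC workspace cache'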

Final thoughts

When I wrote the TFVC tasks back in 2015, it was to help a client make the transition from their TFS 2012 XAML builds to the, back then, new build system. This allowed them to go all-in on, back then, Visual Studio Online and migrate to Git. We're now almost in 2020, which raises the question...

Why are people still heavily dependent on TFVC and only now migrating away from XAML builds? The build agent introduced in 2015 has already had two major versions, XAML builds have officially been deprecated, and Visual Studio Online has been renamed to Visual Studio Team Services and then to Azure DevOps. We've reached the point where many folks are migrating from UI-based build definitions to YAML.

It's time to move on!

Launch WSL bash prompt from Tower


When you launch a terminal from Tower, it launches an included MingW bash shell. Now that Windows 10 ships with the Windows Subsystem for Linux it would be nice to use that bash shell instead.

My first attempts at launching WSL failed miserably. It looks like Windows filesystem redirection causes Tower not to see wsl.exe or bash.exe, because Tower is a 32-bit program.

After a number of tries, I found that you can use the sysnative filesystem redirection point to escape the 32-bit world:

Title: Bash (WSL)
Path: C:\WINDOWS\sysnative\wsl.exe
Arguments:

[Image: Launch Windows Subsystem for Linux from Tower]
[Image: Launching the Windows Subsystem for Linux bash prompt from Tower]

Agile is dead?! Long live Agile

[Slides: See the slides in all their animated glory]

Last week I presented at Techorama 2019 in Ede, The Netherlands, on a topic near to my heart: a reply to the many people saying that Agile is dead, or that Scrum is, or that SRE is the new DevOps.

What many of these people have in common is that they never got to experience what it means to work closely together in a cross-functional team, continuously improving everything around you.

My call to you all is to really invest in your retrospectives and to really do something about the things you find. Improve at least every month, but nothing is preventing you from sitting together daily to make small improvements mid-sprint.


Focus on what was "Done" during Sprint Review


The Sprint Review is generally the hardest event to get right. It's where the Scrum Team and their stakeholders meet, thus the event with the largest number of participants. It's also the event in which different people come with a different purpose.

  • The Development Team is present to show their work and receive feedback. They're hopefully also there to connect to their stakeholders.
  • The Product Owner is there for the feedback too, but also to verify the longer-term road-map and the state of the Product Backlog.
  • The Stakeholders often come to see the things they requested. We also ask them to share insights they have gathered during the sprint and to collaborate on the Product Backlog.

One thing I've observed is that the focus is often on What was planned and What is now "Done". I feel this is an anti-pattern. While progress is important, I feel the focus should be on What is "Done" and What to do next.

[Image: Looking at what was planned vs. "Done" focuses on the Sprint Backlog]

What's so bad about: What was planned and What is now "Done"?

A fair question. Is it important what was planned? You could argue it is. Yet, there is nothing to be done about it now. While it was planned, it wasn't "Done" and can't be delivered. While looking into this may provide ways for the Scrum Team to improve, it's not a topic for the Review, but for the Retrospective. The purpose of the Review is to look forward, not to look back. In short, focusing on what was planned versus what is now "Done":

  1. Focuses on the Sprint Backlog, the short-term plan.
  2. Focuses on the activities done by the Development Team, instead of the value delivered to the customer.
  3. Reinforces the belief that our work is infinitely predictable.
  4. Ignores the primary purpose of the Sprint Review, reviewing what to do next.

Instead of focusing specifically on what wasn't "Done", let us instead look at what that means for the future.

  • Looking at the increment and the product backlog, what adaptations do we need to make?
  • Are there any obvious impediments the Stakeholders could take away?
  • Are there new insights that could change our opinions?

These questions allow us to take a step forward.

Alternative: What is now "Done" and what to do next?

Instead of presenting a list of what was planned and a list of what was delivered, can we achieve the same result, but focus on the future? Two ways I've found to work well are:

Let's look at the backlog now and a sprint ago

In this option the team shows the backlog as it was at the beginning of the sprint and what it looks like right now.

This setup has a couple of advantages:

  1. The focus is on the Product Backlog and on what to do next.
  2. The Product Owner will be able to talk about items that were added and removed at the top of the product backlog.
  3. The Scrum Team can focus simply on the work that was done when showing the increment.
[Image: Comparing the backlog now vs. last time we reviewed it.]

By contrasting the two backlogs, the focus is changed from planned vs delivered to the differences and the future.

The Billboard-Backlog-Hot-100

In this option the Product Owner shows the backlog as if they were presenting the Billboard Top 100: the top of the chart, the work that was delivered and will likely be delivered next; the fastest climbers, the items that have increased in potential or priority; the items that have sunk to the bottom; and the starred items, those that need to be highlighted or warrant additional discussion.

This setup also has a couple of advantages:

  1. It highlights the work "Done" at the top of the list.
  2. It allows the Product Owner to bring specific items to the stakeholders' attention.
[Image: The backlog-hot-100.]

By not showing what was planned, but highlighting the important changes to the product backlog, the focus is changed from planned vs delivered to the changes and their impact on the future.

The impact of not meeting the plan

“What if the team hasn't delivered what the customer needed?” you might ask. Well, that's an interesting question...

  • Did the need appear after the Scrum Team planned their sprint?
  • Did the Scrum Team forecast the work, but not deliver it?
  • Was the work an essential part of the Sprint Goal?
  • Is the customer able to put it to use as soon as the work is "Done"?
  • Can the Scrum Team deliver that work in the first few days of the next sprint?
  • Were there any impediments left unresolved which may have prevented this from happening?

Some of these questions may be raised. And they are valid questions. I'm not suggesting to ignore them. I am suggesting not to put them front and center.

Want to know more? Attend one of my upcoming Professional Scrum Product Owner classes!

99% of code isn't yours


Over the last few years there has been an increase in reported supply chain attacks: attacks where the attacker isn’t trying to get access to your source control repositories, but to those of one of the many projects you depend on. A bitcoin wallet was compromised and sent wallet keys to a third-party domain through a Node.js package that changed ownership. Credit card details for thousands of users were intercepted through the chat client embedded in the same pages that handled transactions. And it’s not limited to websites and JavaScript apps. Asus had their laptop update tools compromised, causing specific targets to download and install additional packages as part of driver updates.

The same dangers lurk for .NET developers. You may be asking: “how does it work, and how does it affect me?”

A supply chain attack occurs when someone infiltrates your systems via a third-party service or dependency to exploit a vulnerability in a system. Typically, attackers try to insert malicious code into official downloads and installers of trusted third-party service providers or into dependencies used by developers. Once organizations start using these services, they are automatically exposed to the embedded malware too. Usually, the attackers are after access to source code or sensitive data, and they can get that access by finding the weakest link in the software supply chain without ever having to go near their target’s servers. One of the advantages for the attackers is that with one piece of malicious code in a dependency, they can target many organizations at once. On top of that it is often difficult for organizations to detect these attacks, since they depend on many third-party services and dependencies.

That is all interesting, but that won’t happen to you, right? Well, as it turns out, it might not be as difficult for hackers to insert some malicious code into your project as you think. Here’s a small scenario: imagine you are a .NET developer within an organization, and your team is responsible for an application handling sensitive information. You want to focus on the business logic of your application instead of reinventing the wheel for every bit of code you need, so you use NuGet as a package manager. It helps you re-use code from other developers to solve some of your tasks, that way you can spend your time on your application’s specific logic.

While this is a common practice, using somebody else’s code means that you need to find a way to trust it. Do you always know what is in the packages you consume? What if one of the many dependencies you use in your project is infected with malicious code? What would be the consequences? And how would you detect this at all?

How can this happen?

It isn’t hard to end up with a different package when restoring packages across machines. This is the default behavior for most package managers, including NuGet. When you restore packages, NuGet will try to find the versions you’re after and do a best-effort attempt to resolve issues.

## Warning NU1603: Microsoft.IdentityModel.Clients.ActiveDirectory 3.13.5 depends on System.Net.Http (>= 4.0.1) but System.Net.Http 4.0.1 was not found. An approximate best match of System.Net.Http 4.1.0 was resolved.

An example from one of the open source projects we maintain

There are a few cases in which NuGet may not be able to get the same package graph with every restore across machines. Most of these cases happen when consumers or repositories do not follow NuGet best practices:

1. nuget.config mismatch: This may lead to an inconsistent set of package repositories (or sources) across restores. Based on the packages’ version availability on these repositories, NuGet may end up resolving to different versions of the packages upon restore.

2. Intermediate versions: A missing version of the package, matching PackageReference version requirements, is published:

  • Day 1: If you specified <PackageReference Include="My.Sample.Lib" Version="4.0.0"/> but the versions available on the NuGet repositories were 4.1.0, 4.2.0 and 4.3.0, NuGet resolves to 4.1.0 because it is the nearest minimum version.
  • Day 2: Version 4.0.0 gets published. NuGet now restores version 4.0.0 because it is an exact match.

3. Package deletion: Though nuget.org does not allow package deletions, not all package repositories have this constraint. Deletion of a package version results in NuGet finding the best match when it cannot resolve to the deleted version.

4. Floating versions: When you use floating versions like <PackageReference Include="My.Sample.Lib" Version="4.*"/>, you might get different versions after new versions are available. While the intention here is to float to the latest version on every restore of packages, there are scenarios where users require the graph to be locked to a certain latest version and float to a later version, if available, only upon an explicit gesture.

5. Package content mismatch: If the same package (id and version) is present with different content across repositories, then NuGet cannot ensure the same package (with the same content hash) gets resolved every time. It also does not warn or error out in these cases.

6. Cache poisoning: NuGet will check the local package cache before checking configured package feeds (unless --no-cache is specified). These will be used in case of an exact version match. If you are using a proxy feed (such as Azure Artifacts), an attacker with access to the feed (or an upstream feed) could publish a specific version to that feed which will be used instead of the one you are expecting.

More and more re-use

If we only depended on a few dependencies, and if they only changed once in a very long while, it wouldn’t be hard to manually review the changes, provided you had access to the sources. In that case, you could copy all your dependencies to a manually curated feed. But we don’t live in that world anymore.

When you create a new Visual Studio 2019 (16.2.2) React.js web application project, you end up with 15214 Node.js packages (686 with known security issues) and 284 NuGet packages (18 with known security issues). If any of them is compromised, you may be adding them to your project the next time you run npm install or dotnet restore.

Or worse, your local development machine may be fine, but the build server may be fetching all the latest versions. This is especially the case when you use the Azure Pipelines Hosted Pool, since every build uses a fresh image with very few packages pre-cached.

What we need is a way to store all our dependent packages in source control in an efficient manner, preferably without having to store all the contents of the packages in source control. Now, while that may sound like a contradiction, it isn’t. Instead of storing all package contents and that of all their dependencies, use what npm, NuGet and yarn do. These tools all store the name, exact version, and a hash of the package contents for all packages in the dependency tree in a file. This file is called a lock file, and by committing this lock file to your version control repository, you ensure that:

  1. Your build server (and your colleagues) will use exactly the same packages you used on your development machine.
  2. You keep an auditable log of all the changes to your dependency tree.
  3. You can inspect all changes to the dependencies prior to committing, or as part of the pull-request review process.

Generate lock files for .NET solutions

Your .NET projects won’t generate lock files by default. You must also upgrade your project to use the new <PackageReference> format. Then you can instruct the build process to generate the lock file through a command line parameter:

Generate the lock file through dotnet:

> dotnet restore --use-lock-file

Generate the lock file through msbuild:

> msbuild /t:restore /p:RestorePackagesWithLockFile=true

You can also add a Property to your project files to generate lock files on every restore:

<Project>     
   <PropertyGroup>         
      <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>     
   </PropertyGroup> 
</Project>

Note: This behavior is different from npm and yarn, which automatically generate the lock files each time you restore your dependencies.

NuGet will now store a packages.lock.json alongside every project. The file contains all the dependencies, their exact versions, how the dependency was introduced, and a hash of the package contents:

"Microsoft.AspNetCore.WebSockets": {
  "type": "Direct",
  "requested": "[2.2.1, )",
  "resolved": "2.2.1",
  "contentHash": "Ilk4fQ0xdVpJk1a+72thHv2LglUZPWL+vECOG3mw+gOesNx0/p56HNJXZw8k1pj8ff1cVHn8KtfvyRZxdplNQA==",
  "dependencies": {
    "Microsoft.AspNetCore.Http.Extensions": "2.2.0",
    "Microsoft.Extensions.Logging.Abstractions": "2.2.0",
    "Microsoft.Extensions.Options": "2.2.0",
    "System.Net.WebSockets.WebSocketProtocol": "4.5.3"
  }
}

Commit these files to your source control repository to store the exact dependencies along your other source files.
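
For example, from the root of your repository (a sketch; the pattern is quoted so Git, not the shell, expands it):

# Stage every project's lock file and commit them alongside the other sources
git add "**/packages.lock.json"
git commit -m "Add NuGet lock files to pin dependency versions"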

Restore from the lock file in your CI solution

What we want NuGet to do, is to download the exact same packages we used on our development system. Just storing your dependencies in source control isn’t enough. One of the first steps of your CI process is likely dotnet restore and unless we do something about it, this will just download a new set of dependencies and then overwrite the lock file.

Instead, we should tell NuGet to restore the exact packages specified in the lock file. And again, this can be done through a command line parameter or an msbuild property.

To restore in locked mode using dotnet:

> dotnet restore --locked-mode

Restore in locked mode using msbuild:

> msbuild /t:restore /p:RestoreLockedMode=true

To ensure the Continuous Integration server uses locked mode by default, you can also set this property in the project file:

<Project>
    <PropertyGroup>
        <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
        <RestoreLockedMode
            Condition="'$(RestoreLockedMode)' == '' And ('$(TF_BUILD)' == 'true' Or '$(CONTINUOUS_INTEGRATION)' == 'true')">true</RestoreLockedMode>
    </PropertyGroup>
</Project>

You’re all set, your .NET projects will now restore to a predictable set of dependencies each time you build it, or the build will fail.

Each time you restore locally, you’ll see exactly which packages have been updated and you can inspect their contents on your development machine:

Restoring against a different .NET Core version may cause different package contents with the same version. This will be detected and will fail your build.

Impact on build times

You may be wondering what the impact on restore times will be when turning this feature on. On the development machine restores will take longer, because the lock file must be generated and the hash for the package contents must be calculated.

On the build server it’s less clear-cut. The time to resolve package versions and calculate the dependency tree is reduced to the time it takes to just load the lock file. This may save a lot of time. On the other hand, verifying the package contents will add some time. In our tests, the average times to run the build on Azure Pipelines were faster with the locked mode turned on.

Hands-on: Try the Global DevOps Bootcamp 2019 challenge

The Global DevOps Bootcamp 2019 featured a Supply Chain Attack challenge that lets you experience the effects of a supply chain attack. As part of the hands-on lab you get to generate npm and NuGet package lock files, adapt the build process to perform locked restores, and add a scanner to your build process to detect known vulnerabilities in your dependencies. By applying these techniques, you will be able to take control over what you ship to your customers every time you deploy your latest changes.

This article is part of XPRT. magazine #9.
Get your free copy or download XPRT. magazine


Banner photo used under creative commons.

Many funny agile movies


Some people just happen to learn more from short movie clips. And they can be a great way to bring some humor in.

Please add your suggestions in the comments below.

Scrum for Schmucks
My brutally honest summary of the Scrum framework for the typical dumb schmuck in the street.
Being Agile is our favourite thing
ThoughtWorks UK’s Agile advert, a tribute to Julie Andrews and OK Go!
I want to run an agile project
***Watch the sequel at http://youtu.be/lAf3q13uUpE *** In this movie “I want to run an agile project” we follow the experiences of one such brave project lea...
I want to run an agile project, part 2
In this movie “I want to run an agile project, part II” we follow the continuous journey of our brave project leader, Luke, as he has a new Agile opportunity...
Deathstar Project Deployment Meeting
Management is not happy when end user testing suggests delaying the deployment of the Deathstar so that a critical defect can be corrected
A Conference Call in Real Life
WE WANT TO PERFORM AT YOUR CORPORATE EVENT: http://bit.ly/2fGo5ri OR HELP YOUR COMPANY MAKE LESS BORING VIDEOS: http://bit.ly/2gCiL9r FOLLOW ON INSTAGRAM: ht...
A Video Conference Call in Real Life
WE WANT TO PERFORM AT YOUR CORPORATE EVENT: http://bit.ly/2fGo5ri OR HELP YOUR COMPANY MAKE LESS BORING VIDEOS: http://bit.ly/2gCiL9r OUR PODCAST: http://app...
Email in Real Life
BRING US TO YOUR CORPORATE EVENT: http://bit.ly/2fGo5ri OR LET US HELP YOUR COMPANY MAKE LESS BORING VIDEOS: http://bit.ly/2gCiL9r FOLLOW ON INSTAGRAM: https...
De Expert (Korte Comedy sketch)
Subscribe for more short comedy clips and films: http://bit.ly/laurisb Funny business meetings that show how hard it is for an engineer to ...
De expert: De verkeerde hoek (korte comedy sketch)
Square Project Ep1. Another day in the life of Anderson, an engineer trying to survive in the corporate world. A funny video about the nuance...
The Expert: Progress Meeting (Short Comedy Sketch)
Square Project Ep3. Funny business meeting illustrating how hard it is for an engineer to fit into the corporate world! Another day of Anderson navigating th...
The Expert: IT Support (Short Comedy Sketch)
Square Project Ep2. A funny video about a phone call to IT Support. Another day in the life of Anderson, an engineer trying to fit into the corporate world a...
Exact Instructions Challenge - THIS is why my kids hate me. | Josh Darnit
Exact Instructions Challenge PB&J edition Another Challenge with Johnna and Evan: https://www.youtube.com/watch?v=sLaVM6af-RE&t=121s We asked the kids to wri...
Exact Instructions Challenge - Ramen Edition | Josh Darnit
Exact Instructions Challenge Ramen Noodles Edition. Check out the original PB&J edition here: http://bit.ly/2pChgjA Sing along on our Helium Challenge here: ...
Exact Instructions Challenge Drawing Edition | Josh Darnit
Exact Instructions Drawing Edition, this time I wanted to try to actually get it right. How do you think we did? Pass or Fail? Check out the original Exact I...
The Wrong way to do Agile: Specifications
Chet Rong shows on the Rong Way to do requirements. Follow @ChetRong on Twitter! (Just don’t follow his advice.) Visit atlassian.com/agile for more on doing ...
The Wrong way to do Agile: Stand-ups
Chet Rong shows on the Rong Way to do stand-ups. Follow @ChetRong on Twitter! (Just don’t follow his advice.) Visit atlassian.com/agile for more on doing agi...
The Rong way to do Agile: Planning
Chet Rong shows on the Rong Way to do planning and estimation. Follow @ChetRong on Twitter! (Just don’t follow his advice.) Visit atlassian.com/agile for mor...
The Wrong way to do Agile: Team Structure
Chet Rong shows on the Rong Way to do team structure. Follow @ChetRong on Twitter! (Just don’t follow his advice.) Visit atlassian.com/agile for more on doin...
The Wrong way to do Agile: Retrospectives
Chet Rong shows on the Rong Way to do retrospectives. Follow @ChetRong on Twitter! (Just don’t follow his advice.) Visit atlassian.com/agile for more on doin...
Many funny agile movies
Spooning By Bitbucket - komik video
http://www.bi9.net - http://www.bi9.net/kapak-sozler - komik videolar
Many funny agile movies
“Shit Bad Scrum Masters Say”
Don’t be a bad ScrumMaster! Grab a hilarious (free) new activity for your team’s next Retrospective: https://weisbart.com/free-agile-adlibs
Many funny agile movies
“Opposite Day” - How NOT to Apply Scrum at Your Office
Here’s a little example of scrum gone amuck. It’s funny here, but it’s sad when it happens at your workplace. We had fun showing this video at Global Scrum G...
Many funny agile movies
Scrum According Silicon Valley
Scrum from Silicon Valley (HBO)
Many funny agile movies
The Big Bang Theory - The Military Miniaturization S10E02 [1080p]
All Rights to Warner Bros. Television & CBS!
Many funny agile movies

Caution

The videos below can be considered tasteless or harmful. Watch/use at your own risk.

Hitler the Agile Coach
I see the concept of "water scrum fall" all the time, but I wonder what Hitler would say of it if he were an Agile coach. Watch and see...

Hitler at a sprint review
What would happen if Adolf Hitler attended a Scrum sprint review? Well, sit back and enjoy...

You can find the whole playlist on YouTube as well.

Photo credit: Beat Ernst.

My tools of trade


When I got the Dell I had created a Boxstarter script, but I hadn't kept it up to date since. I don't re-image my laptop often enough to warrant maintaining it, it seems, but the more time I spend on the command line, the more I'm getting used to all of this.

So far I'm super happy with this new beast: great screen, lots of power and a lot of memory. Be on the lookout for deals; when I bought this machine, Lenovo had a discount of more than €700.


Inspired by Scott Hanselman's 2014 Ultimate Developer and Power Users Tool List for Windows, this is my list of tools and configuration changes. I'd be interested to hear which alternatives you are using and why, which tools I should have heard of ages ago, and how I might optimize my workhorse even further. Leave a comment below!

Still looking for:

  • A better way to combine all my different communication channels into one. Slack, Teams, Mail, Zoom, WhatsApp, Kaizala, SMS... There's just too many of them!
  • A really versatile on-screen timer that understands presenter mode for use in training. Set a time-box on my laptop, show it as an overlay on the main screen.
  • A soundboard app/device that can play sound effects during classes and presentations.

The OS

  • Windows 10 2004 - I need WSL2 to run Docker on WSL, and since 2004 is essentially finished and no longer shows the watermark, it's a no-brainer. I used the ISO to upgrade Lenovo's default installation in-place.
  • WSL2 - WSL2 is a lot faster than WSL1, comes with full Docker support (so you no longer need to run Docker inside Hyper-V) and adds a whole slew of features to Visual Studio Code. See the sketch below this list for switching a distro over to WSL2.
  • Docker Tech Preview - Adds Docker Desktop on Windows and can host itself in WSL2.
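
Converting a distro is straightforward; a quick sketch, assuming Windows 10 2004 with the WSL feature already enabled:

wsl --set-default-version 2    # make WSL2 the default for newly installed distros
wsl --set-version Ubuntu 2     # convert an existing Ubuntu distro to WSL2
wsl --list --verbose           # verify which version each distro runs on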

Things I change:

# Remove the optional features and capabilities I don't use
Disable-WindowsOptionalFeature -Online -FeatureName WorkFolders-Client
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
Disable-WindowsOptionalFeature -Online -FeatureName Printing-Foundation-InternetPrinting-Client
Get-WindowsCapability -Online -Name "XPS.Viewer*" | Remove-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Browser.InternetExplorer*" | Remove-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Print.Fax.Scan*" | Remove-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Media.WindowsMediaPlayer*" | Remove-WindowsCapability -Online
Get-WindowsCapability -Online -Name "App.Support.QuickAssist*" | Remove-WindowsCapability -Online

# Add the languages, features and capabilities I do use
Get-WindowsCapability -Online -Name "Language.*nl-NL*" | Add-WindowsCapability -Online
Get-WindowsCapability -Online -Name "App.WirelessDisplay.Connect*" | Add-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Hello.Face.*" | Add-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Language.*en-US*" | Add-WindowsCapability -Online
Get-WindowsCapability -Online -Name "Language.*en-GB*" | Add-WindowsCapability -Online
Enable-WindowsOptionalFeature -Online -FeatureName TelnetClient
Enable-WindowsOptionalFeature -Online -FeatureName SmbDirect
Enable-WindowsOptionalFeature -Online -FeatureName Client-ProjFS
  • Uninstall a whole bunch of default store apps: Messages, News, Mail (Lenovo hadn't installed many of them in the first place).
# Remove the default Store apps I never use
Get-AppxPackage *getstarted* | Remove-AppxPackage
Get-AppxPackage *officehub* | Remove-AppxPackage
Get-AppxPackage *3dbuilder* | Remove-AppxPackage
Get-AppxPackage *windowscommunicationsapps* | Remove-AppxPackage
Get-AppxPackage *skypeapp* | Remove-AppxPackage
Get-AppxPackage *solitairecollection* | Remove-AppxPackage
Get-AppxPackage *zunevideo* | Remove-AppxPackage
Get-AppxPackage *bing* | Remove-AppxPackage
Get-AppxPackage *messaging* | Remove-AppxPackage
Get-AppxPackage *Microsoft.people* | Remove-AppxPackage
Get-AppxPackage *ZuneMusic* | Remove-AppxPackage
  • Default all my printers to A4 paper size
Get-Printer | ForEach-Object { Set-PrintConfiguration -PrinterObject $_ -PaperSize A4 }

The Web

  • 1Password - The company I work for provides free 1Password for Business and Family accounts. I'd be stupid not to use them :). In the past I've been a happy long-time LastPass user.
  • Edge Chromium Beta - Fast, well integrated with Azure Active Directory and constantly improving. Haven't had any issues on the beta channel, can't wait for the stable releases.

Navigate to the Chrome Web Store to add it as a trusted source to Edge. Then I installed:

  • 1Password X - This version of the extension works with Edge Chromium.
  • Send to Kindle - Send articles and long docs pages to my Kindle for on-the-go reading or to bring them to an interruption-free device.
  • DuckDuckGo Privacy Essentials - Edge Chromium already has pretty good privacy protection built in. DuckDuckGo is my default search engine.

Writing Code

Writing and reviewing code is still a large part of my work. Over time I've collected a large set of tools to make my life easier.

Visual Studio 2019 Enterprise - As a Microsoft MVP I get a complimentary license for Visual Studio 2019. It has been a long-time friend, and the 2019 version has seen many performance, usability and stability improvements.

I extend it with a whole bunch of extensions. The Roaming Extension Manager makes it super efficient to grab them without having to look them all up:

Use the Roaming Extension Manager to quickly reinstall all your favorite extensions.
  • OzCode - An advanced debugging extension for .NET developers that makes it easier to debug issues through advanced flow analytics, predictions and visualizations.
  • Productivity Power Tools 2017/2019 - Somehow I often end up manually editing project files, putting pretty code snippets in articles and presentations, etc. More and more of these features are becoming standard features of the IDE; until then I probably can't do without these extensions.
  • Web Essentials 2019 - If you're doing Web Development in Visual Studio, these extensions just add so many nice features, it's a must have if you do anything with HTML, CSS or JavaScript.
  • .ignore - Makes it super easy to drop a default ignore file into your repository based on the languages and tools you're using.
  • Trailing Whitespace Visualizer - "You can't see it, why bother," some people say, but I've had to do some pretty weird merges due to trailing white space or weird white spaces in the code. This just takes away the excuse to leave them in.
  • Learn The Shortcut - When you see my new keyboard, you'll understand why I'd want to know how to do stuff with my keyboard. This extension shows the keyboard shortcut in the status bar every time you do something with your mouse when a shortcut would have sufficed.
Ultimate Hacking Keyboard in Colemak layout.
  • Whack Whack Terminal - I've used the Package Management Console inside Visual Studio for a long while to do console operations from within Visual Studio, but it has its issues. This extension solves all of these issues. No more "Command Prompt Here" from the Solution Explorer for me.
  • LiveShare - Remote workers are the future for a sustainable world and a sustainable life in the 24/7 economy. Collaboration is key for agility, continuous learning and a fast time to recover in case of failure. LiveShare is a key technology that enables development teams to work together over a low-bandwidth connection in a very high-fidelity way.
  • Git Web Links - Ever had a file open in Visual Studio and wanted to send a selection to a co-worker over Teams? Or drop a link to a problematic piece of code in a Jira issue? This extension can detect many of the code hosting platforms and put a link to your code file or selection straight on your clipboard.

Visual Studio Code - Even with all the power of Visual Studio "proper", as many tend to call it, I love the speed and simplicity of Visual Studio Code for quick edits, reading files and working with many non-Microsoft technologies, for which Code has an almost unlimited number of extensions. Before switching to Code I used Sublime Text, Atom, even Notepad(++), but Visual Studio Code has replaced all of them for me.

Like Visual Studio 2019 Enterprise, I've extended Code with a whole bunch of extensions to fit my needs even better.

  • LiveShare - The same high-fidelity, low-bandwidth collaboration described above for Visual Studio, available inside Visual Studio Code.
  • Remote Development Pack - WSL, Containers - integrates Visual Studio Code deeply into the Windows Subsystem for Linux and allows you to open the file system of a container, debug processes inside it and do other fancy things for when your containers aren't doing what they are supposed to do.
  • REST Client - A Postman-like client inside Visual Studio Code that allows you to call and debug REST APIs; see the sample request file after this list.
  • Azure - Account, CLI, Pipelines, Repos - Azure and Azure DevOps integration inside Visual Studio Code.
  • Language support - C#, NodeJS, Fish, PowerShell, Docker, .gitignore, JavaScript and TypeScript - Visual Studio Code supports just about any language out there, with tons of extensions to make your life easier.
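
For example, this is all it takes to fire a request with the REST Client extension: save it as sample.http and hit "Send Request" (the URL is only an illustration):

### Fetch the latest VS Code release metadata from the GitHub API
GET https://api.github.com/repos/microsoft/vscode/releases/latest
Accept: application/vnd.github.v3+json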

Other Development utilities I can't do without.

  • dotPeek - Many of my tools integrate or extend Azure DevOps. Its APIs are somewhat documented, but in many cases I tend to run into the weirder details of how this product works. Digging through the code of the actual thing you're calling into can be a tremendous help. I install a local copy of Azure DevOps Server 2019 to always have the latest binaries at hand. It's not 100% the same as what runs in the cloud, but often close enough. Other options to try: ILSpy, .NET Reflector.
  • Tower - I've tried many Git clients over the past few years. When I'm in Visual Studio I tend to use Team Explorer for most Git related things. But outside of Visual Studio I use either the raw CLI or Tower. Tower's interactive rebase features make it much easier to clean up your messes before pushing and Tower seems to strike the correct balance between features and usability. Tower is a paid product, but as a Microsoft MVP I receive a free subscription. Other Git clients I used: Git Kraken, SourceTree. See my blog posts on how to integrate Tower with WSL and Visual Studio 2019.
  • Fiddler - A web debugging proxy that works with just about any tool on your system. I use the Chrome/Edge Web Development Tools extensively to understand how the browser communicates with Azure DevOps as part of extension development. But when you're building a task for Azure Pipelines you don't have such a luxury. Until you install Fiddler, that is.

On the Command Line

Windows Terminal - I used to be a bare-bones console user; firing up cmd or powershell was how I did most of my console work, until I started using WSL. Now all my consoles must look fancy, and the new Windows Terminal Preview sure helps with that. I've tried other consoles like ConEmu and cmder.

Windows Console

  • Git for Windows - While I do a lot of work on WSL, I need Git on the Windows side of things as well. And I love how Git for Windows now integrates with the Windows credential and certificate stores. This makes it so much easier to work with weird enterprise Git servers and web proxies.
  • Azure CLI and Azure DevOps CLI - Manage Azure and Azure DevOps from the CLI. Or better yet, use it to setup training and demo environments.

Windows Subsystem for Linux 2

  • Ubuntu - Ubuntu was one of the first distros to ship on WSL when it was first released, which made it my default Linux distro on WSL, and it has stayed that way.
  • Git, Git-LFS - These need no introduction.
# Install the latest Git from the git-core PPA
sudo add-apt-repository ppa:git-core/ppa
sudo apt update
sudo apt install git

# Install Git-LFS and enable it for your user account
curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
sudo apt-get install git-lfs
git lfs install
  • NodeJS, Yarn, NPM - Azure Pipelines extensions run on Node. WSL allows me to quickly test and debug my tasks straight from Visual Studio Code.
# Install NodeJS 13.x from NodeSource
curl -sL https://deb.nodesource.com/setup_13.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install Yarn from its official repository
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt update
sudo apt install yarn
  • Fish, Oh-My-Fish, Powerline - While learning WSL I found that a lot of bash-bashing was happening, and many people pointed me to Fish. Powerline adds a lot of context information to your console, especially if you spend a lot of time inside a Git repository.
Make your console look pretty with Powerline
# Fish
sudo apt-add-repository ppa:fish-shell/release-3
sudo apt-get update
sudo apt-get install fish
chsh -s `which fish`

# Oh-My-Fish
curl -L https://get.oh-my.fish | fish

# Powerline
sudo apt install python3
sudo apt install python3-pip
pip3 install --user powerline-status

# Integrate Powerline in Fish
set fish_function_path $fish_function_path "/usr/share/powerline/bindings/fish"
source /usr/share/powerline/bindings/fish/powerline-setup.fish
powerline-setup

# Set theme for fish
omf install bobthefish
  • Hub - A wrapper for Git that adds a whole bunch of commands specifically for GitHub. Create pull requests, issues and other things straight from the console; a few examples follow below.
sudo add-apt-repository ppa:cpick/hub
sudo apt-get update
sudo apt-get install hub
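
A couple of illustrative hub commands (the branch and messages are made up; check hub --help for the full list):

hub pull-request -m "Add retry logic"     # open a pull request for the current branch
hub issue create -m "Build fails on WSL"  # file a new issue in the current repository
hub browse                                # open the current repository on GitHub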
  • Azure CLI and Azure DevOps CLI - Manage Azure and Azure DevOps from the CLI. Or better yet, use it to setup training and demo environments.
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az extension add --name azure-devops
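
For example, pointing the CLI at an organization and scaffolding a demo project takes just a few commands (the organization and project names are illustrative):

az devops configure --defaults organization=https://dev.azure.com/my-org
az devops project create --name DemoProject
az repos list --project DemoProject --output table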

PowerShell

Like Fish, PowerShell can be dressed up with support for Git Status and Powerline fonts.

  • PowerShell Core 6 - PowerShell has seen a lot of development and improvements with the push for cross-platform availability and .NET Core.
  • PoSH Git, Oh-My-Posh - Fancy consoles are not unique to the Linux world:
Dress up your PowerShell prompt using posh-git and oh-my-posh
# Install posh-git and oh-my-posh, then enable the fancy prompt
PowerShellGet\Install-Module posh-git -Scope CurrentUser -AllowPrerelease -Force
Install-Module oh-my-posh -Scope CurrentUser
Set-Prompt
Set-Theme Fish

To load this customization by default, update your PowerShell profile.
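
A minimal sketch of what that profile addition could look like (run notepad $PROFILE to edit it):

# Load posh-git and oh-my-posh and apply the theme in every new session
Import-Module posh-git
Import-Module oh-my-posh
Set-Theme Fish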

  • Hub - Hub isn't just available on Linux. Manage GitHub just as easily from PowerShell!

Communications

Xpirit can't make up its mind about the best communications tool; depending on who you ask, they prefer something else. Add to that our parent company Xebia and the partners I regularly work with, Microsoft and Scrum.org, and the number of icons in my task tray explodes just to stay in contact with everyone.

  • Slack, WhatsApp Web, Teams, Zoom...

Way back when, I used a chat client that integrated everything. The end result: you get all of the basic features everywhere and none of the fancy features. But with all of these clients installed, it's sometimes impossible to find an old conversation or track one that went from WhatsApp to Slack to email and back.

Office

While everything-as-code is the rule in DevOps, my documents and presentations are built in Office and distributed in PDF format. I know a lot is possible with Markdown and Reveal.js, but many clients don't appreciate it.

  • Office 365 - Already installed by Lenovo, automatically activated when I joined my machine to Azure Active Directory. It remains the standard. Outlook, Word and PowerPoint.
  • Acrobat Reader - While browsers can generally display PDF documents just fine, annotations, corrections, form-filling and other more advanced features still require Acrobat Reader or an alternative to be installed.

Our Amsterdam office has ClickShare installed in all the training rooms. A few of the clients I regularly visit to teach Scrum have AirMedia displays, Chromecasts and other wireless display technologies.

  • Barco ClickShare Extension Pack - Adds "Extend Display" and PowerPoint Presenter mode. By default ClickShare dongles only let you mirror your screen.
  • Crestron AirMedia - Many of the installations I encounter haven't received an update in forever and they serve a pretty old version of the client software when you connect. To always get the best experience I manually install the client from Crestron's support pages.
Tip: I carry a 4K HDMI mining dongle to create a fake screen to mirror. This allows you to use Presenter Mode with many of these devices, even if they don't natively support Extend Display.
Get yours on Amazon.com.

Learning & Relaxing

  • Kindle - I'm an avid reader; hitting 50 books a year was normal in the past. Two kids have reduced my average considerably, unless I count every Miffy book I've read a gazillion times. I also used to be a big hardcover fan and would buy every book to keep it with me... that is, until I got my hands on a Kindle Touch, which has been upgraded to a Kindle Paperwhite and later, after a carry-on accident, to a Kindle Voyage. I prefer reading on the device itself, but sometimes, when I have a couple of minutes to kill, I may open up Kindle on my PC.
  • Send To Kindle - Every Kindle device has a magic email address to which you can send ebooks, documents and other things you may want to read on your Kindle. But with Send to Kindle you also get a special printer and explorer integration, making it a 2-click experience. This allows me to quickly send a document to my Kindle for reading in a silent corner.
  • Audible - The same love for reading made me search for ways to be more productive in the car between home and work. There are days I spend 2 to 3 hours in my car, and it started to irritate me to hear the same news clips 4 times per hour. Kindle's Whispersync allows me to read a book at night and just put it on the nightstand; my phone will automatically continue where I left off in the car. It works really well for thrillers, fantasy and science fiction, not as well for work-related titles.
  • Pocket Casts Desktop - When not listening to a book on my way to work, or when I'm doing shorter commutes, I switch to Pocket Casts to listen to podcasts. Pocket Casts neatly syncs positions between the PC app and the Android app, and can fetch episodes for offline listening (on a plane, for example).
  • Calibre e-book management - I've owned a Kindle for about 10 years now and collected a whole bunch of free and paid ebooks. I use Calibre to manage my ebook library and to convert ePub books to Amazon's native Mobipocket format when needed. Calibre supports every ebook format and almost every ebook reader, and makes it a breeze to back up your book library in a central place. A short search will also point you to a bunch of plugins that can remove DRM from books.
  • Feedly.com - Not something installed on my PC, but something I check almost daily. I've been reading blogs ever since I started college; Feedly became what Google Reader used to be for me.

Graphics & Video

  • SnagIt - In presentations, training material, blog posts, StackOverflow answers, product feedback... a picture says more than a thousand words, and a movie sometimes says more than a hundred pictures. SnagIt is a powerful screen capture utility that can capture screenshots of whole web pages, movie clips of your whole screen or an area of it, and then allows you to annotate/anonymize these. Other tools exist that do a very similar job, some free, like GreenShot. SnagIt is a paid product, but as a Microsoft MVP I get a free personal license.
  • Camtasia - In online training material, a raw screen capture is usually not enough. When narrating the material you may end up having to speed up or slow down certain areas of your material, highlight other areas and possibly hide some information. Camtasia is a powerful multi-track video editing solution that ticks all of these boxes. It's not as powerful as Adobe After Effects or Premiere, yet it has a number of specific features that make it ideal for working with training material based on screen captures. Camtasia is a paid product, but as a Microsoft MVP I get a free personal license.
  • Handbrake - This nifty free utility can convert video files from just about any format to just about any format. Whether you downloaded the file, or are converting a DVD or Blu-Ray. Really useful to convert your Camtasia project to different formats or to put your kid's favorite stories on your laptop while travelling.
  • fre:ac - Similar to Handbrake, but this time for audio.
  • Creative Cloud Photography Plan - While there are many great free image editors out there, I've been a faithful user of Photoshop and Lightroom ever since I got a license as a student. The Photography plan combines Lightroom and Photoshop in a single package and 500px tends to have a yearly promotion that adds a 1 year plan to it (or vice versa, depending on how you look at it).
  • Autodesk Sketchbook - Sketchbook came with a Samsung Galaxy Note 4 I used years ago and is now free for the PC. It's a free-hand drawing tool with many cool features. It works best when you have a pen or Wacom tablet. This Lenovo X1 Extreme can be used with a pen, but I haven't been able to draw with it yet, since the pen is on back-order and isn't expected to ship until February 2020.
  • Microsoft Whiteboard - A simplified version of Sketchbook, but with the ability to draw together over the Internet. Super useful for teams and works even when you're using a normal phone-conference system instead of Teams or Slack.
  • VLC - Play any video and audio file without fuss. Super simple UI, fast, no nonsense. And most importantly: no need for codec packs.

File Sharing

Many of my clients use OneDrive for Business. Xpirit uses DropBox for Business and Scrum.org stores many of its documents on Google Drive. And before you know it you have all of these clients syncing data. I don't have a clear favorite.

Utilities

  • 7-zip - This extremely simple archive compression utility can extract just about any format out there in the world. It can also create a number of different archive formats, and its native 7z format has one of the best compression ratios in the world. Apart from the UI, 7z comes with a powerful command-line utility. The one gripe I have with it is that the command-line switches are hard to memorize. I google "jessehouwing recursive extract 7z" too often to get back to my own SuperUser.com answer.
  • TunnelBear - ROAAAAR! Every time you connect, this little bear roars to tell you a secure connection has been established. Available for any platform, and you get free bytes by tweeting.
  • f.lux - To help me sleep and reduce eye strain I use f.lux on my PC and Twilight on my mobile. These tools reduce the amount of blue light emitted by the screen to help your body produce enough melatonin, which in turn helps you fall asleep more easily. Windows 10 has a native Night Light feature, but it isn't as powerful as f.lux. Lenovo Vantage has an Eye Care mode, but I haven't really investigated that yet.
  • FAR Manager - I grew up with Norton Commander and RAR Archiver on the DOS command line. That RAR archiver came with a Norton commander style UI and the author went on to create a complete Norton Commander clone for Windows. It supports a multitude of file systems, can navigate network drives, FTP servers and more. When I'm not using Windows Explorer to manage my documents, FAR Manager is my go-to solution.
  • Unchecky - Viruses and malware are bad, but many "free utilities" are a nuisance too! Unchecky sees when you run an installer and automatically unchecks any optional bloatware that may be trying to sneak its way onto your computer.
  • Rufus - Flash a bootable disk image to your USB storage device and use it to install a new operating system, run recovery tools or launch Windows To Go and have your own configuration from a supported USB key.
  • SysinternalsSuite - I can't count the number of times Process Monitor or Process Explorer have helped me debug buggy applications, performance issues or other problems; whether they were my own doing or someone else's.
  • CDBurnerXP - It doesn't happen very often, but I do need to put music on a CD, burn a video clip to a DVD or create a more permanent backup of the family's photo archive on a Blu-ray disc. CDBurnerXP is a very simple application that supports all of that without too many bells and whistles. It doesn't just burn CDs, but also DVD, HD-DVD and Blu-ray. I've combined it with a USB Blu-ray drive that hooks up to my workstation or my laptop with ease.
  • EarTrumpet - My laptop has its own speakers and headphone jack. My headset is USB powered. My screen has built-in display audio, and I have a USB soundbar on my desk. Oh, and the dock has a headphone jack too. When my laptop is connected to the dock, that amounts to at least 6 different audio devices fighting for control of the volume. The standard Windows volume slider outputs all audio to the device you select. With EarTrumpet you can play children's music on the soundbar, put your conference call on your headset and adjust the volume of individual applications with ease.

Lego

  • Stud.io - A 3D design application for your own Lego creations. I'm currently working on a Lego design for our future home.

Configure Tower to use the new Windows Terminal


Windows Terminal now supports command-line arguments to open specific terminals and in specific directories. Which means you can now configure other tools to launch specific terminals in the directory you're in.

Configuring Tower to launch Windows Terminal is now simple:

Title:     Windows Terminal
Path:      C:\Users\{username}\AppData\Local\Microsoft\WindowsApps\wt.exe
Arguments: -d . 
Adds Windows Terminal to Tower

To launch specific profiles, add: -p {profile name}.
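
For example, to have Tower open the Ubuntu profile in the current repository (the profile name depends on your Windows Terminal settings):

Arguments: -d . -p "Ubuntu"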

To easily find the location of Windows Terminal, run where wt from a command line:

Find the path to your wt executable.

It worky-worky!

Package Feeds consuming most data in Azure DevOps Server


You'll find a few SQL statements floating around that can help you with these types of questions, but they don't list the Package Feed data usage. After some spelunking in my local server installation, I cobbled together the following statements to dig a little bit deeper:

-- Total blob storage consumed per feed
select
   [f].FeedName,
   sum(cast([list].BlockFileLength as decimal(38)))/1024.0/1024.0 as SizeInMb
from
   BlobStore.tbl_Blob [blob]
   join BlobStore.tbl_BlockList [list] on [list].BlobId = [blob].BlobId
   join [Feed].[tbl_PackageVersionIndex] [fd] on '0x'+[fd].StorageId = convert(varchar(max), [blob].BlobId, 1)
   join [Feed].[tbl_Feed] [f] on [fd].FeedId = [f].FeedId
   join [Feed].[tbl_PackageIndex] [p] on [p].PackageId = [fd].PackageId
group by
   [f].FeedName
order by
   SizeInMb desc

-- Storage per package per feed, including the number of versions of each package
select
   [f].FeedName,
   [p].PackageName,
   sum(cast([list].BlockFileLength as decimal(38)))/1024.0/1024.0 as SizeInMb,
   (select count(pvi.PackageVersionId) from [Feed].[tbl_PackageVersionIndex] [pvi]
     where pvi.FeedId = f.FeedId and pvi.PackageId = p.PackageId) as Versions
from
   BlobStore.tbl_Blob [blob]
   join BlobStore.tbl_BlockList [list] on [list].BlobId = [blob].BlobId
   join [Feed].[tbl_PackageVersionIndex] [fd] on '0x'+[fd].StorageId = convert(varchar(max), [blob].BlobId, 1)
   join [Feed].[tbl_Feed] [f] on [fd].FeedId = [f].FeedId
   join [Feed].[tbl_PackageIndex] [p] on [p].PackageId = [fd].PackageId
group by
   [f].FeedName,
   [p].PackageName,
   f.FeedId,
   p.PackageId
order by
   SizeInMb desc

The outcome is a list of feeds and their total consumption, as well as a list of feeds decomposed into the different packages in each feed.

Data usage per feed and per package

From here further exploration should be a piece of cake!

Photo used under Creative Commons.

Enable your custom background on Microsoft Teams

Update: Works on Mac too. Thanks Albert Brand!

With a lot of people working from home now, we're giving the world a peek into our homes, and it may not always be the most representative view. Like me, you may not have a dedicated room and instead sit at the kitchen table. The option to inject a picture of your office or your favorite spot in the mountains is super useful.


Microsoft notes that custom images are coming "soon". It turns out the feature is already here, but may not have the required user interface elements yet.

I stumbled upon this little tweet on how to do this:

To add your own images, make sure they have the following dimensions: 1920x1080.


Then the only thing you need to do is save the image, in JPG format, to the following (hidden) directory on Windows:

%APPDATA%\Microsoft\Teams\Backgrounds\Uploads


And on a Mac:

/Users/<account>/Library/Application Support/Microsoft/Teams/Backgrounds/Uploads
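
On Windows, dropping a file in place is a one-liner (a minimal sketch; the image name is just an example):

Copy-Item "$env:USERPROFILE\Pictures\office-background.jpg" "$env:APPDATA\Microsoft\Teams\Backgrounds\Uploads"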

Then enable your custom background from your meeting settings:

Your custom images will show up at the bottom of the list.


And as you can see, this puts me back in the office with my virtual colleagues. Or in a fun fantasy setting!

Show your cool backgrounds in the comments!


What to do when your build hangs on the Hosted Pool...


Both GitHub Actions and Azure Pipelines offer the ability to run your CI pipeline in the cloud. These agents are provisioned in Azure and fully maintained by Microsoft. The hosted runners are cost-effective, require no maintenance and are basically free to use in many cases.

But what to do when your build just freezes and stops responding?

If this were your own runner, or if it were hosted by your company, you could probably remote into the machine to see what's going on. Unfortunately, that's not an option with the hosted runners.

After cancelling the build you could inspect the logs for any hints, but they may not reveal anything useful either.

Now what?!

What I really wanted was a quick peek at the desktop of the agent; a screenshot would do. So that is what I set out to accomplish. A quick google turned up 9 different command-line tools to grab a screenshot of the desktop, and one stood out for its ease of use: screenshot-cmd. It's a simple portable executable that can be downloaded directly. So I cobbled together a little PowerShell to download it, run it and add the screenshot to the logs:

# Download screenshot-cmd to the agent's tool directory
Invoke-WebRequest -Uri "https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/screenshot-cmd/screenshot-cmd.exe" -OutFile "$(Agent.ToolsDirectory)\screenshot-cmd.exe"

# Grab a screenshot of the desktop
& "$(Agent.ToolsDirectory)\screenshot-cmd.exe"

# Attach the screenshot to the pipeline logs
Write-Host "##vso[task.uploadfile]$(Agent.ToolsDirectory)\screenshot.png"

and got a screenshot in the logs:


Telling me... nothing...

A screenshot of the PowerShell console my little tool was launched from.

And some more StackOverflow hunting led me to a snippet to minimize that console.

# P/Invoke helpers to find and resize the console window
Add-Type -Name ConsoleUtils -Namespace WPIA -MemberDefinition @'
      [DllImport("Kernel32.dll")]
      public static extern IntPtr GetConsoleWindow();
      [DllImport("user32.dll")]
      public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
'@

$ConsoleMode = @{
    HIDDEN = 0;
    NORMAL = 1;
    MINIMIZED = 2;
    MAXIMIZED = 3;
    SHOW = 5;
    RESTORE = 9
}

# Minimize the console window the script was launched from
$hWnd = [WPIA.ConsoleUtils]::GetConsoleWindow()
[WPIA.ConsoleUtils]::ShowWindow($hWnd, $ConsoleMode.MINIMIZED)

Which finally revealed:

The Just-in-Time Debugger is still enabled on the Visual Studio 2017 agent.

Disabling the Just-in-Time debugger is a simple matter of resetting a couple of registry keys. With that done, the build no longer freezes and actually tells me what's wrong.

# Unregister the JIT debuggers and suppress the Windows Error Reporting dialog
& reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AeDebug" /v Debugger /d - /t REG_SZ /f
& reg add "HKLM\SOFTWARE\Microsoft\.NETFramework" /v DbgManagedDebugger /d - /t REG_SZ /f
& reg add "HKCU\Software\Microsoft\Windows\Windows Error Reporting" /v DontShowUI /d 1 /t REG_DWORD /f

Bliss!

Putting it all together

The final script looks like this (in YAML):

steps:
- powershell: |
   Add-Type -Name ConsoleUtils -Namespace WPIA -MemberDefinition @'
      [DllImport("Kernel32.dll")]
      public static extern IntPtr GetConsoleWindow();
      [DllImport("user32.dll")]
      public static extern bool ShowWindow(IntPtr hWnd, Int32 nCmdShow);
   '@
   
   $ConsoleMode = @{
    HIDDEN = 0;
    NORMAL = 1;
    MINIMIZED = 2;
    MAXIMIZED = 3;
    SHOW = 5
    RESTORE = 9
    }
   
   $hWnd = [WPIA.ConsoleUtils]::GetConsoleWindow()
   
   $a = [WPIA.ConsoleUtils]::ShowWindow($hWnd, $ConsoleMode.MINIMIZED)
   
   Invoke-WebRequest -Uri "https://storage.googleapis.com/google-code-archive-downloads/v2/code.google.com/screenshot-cmd/screenshot-cmd.exe" -OutFile "$(Agent.ToolsDirectory)\screenshot-cmd.exe"
   & "$(Agent.ToolsDirectory)\screenshot-cmd.exe"
   Write-Host "##vso[task.uploadfile]$(Agent.ToolsDirectory)\screenshot.png"
  workingDirectory: '$(Agent.ToolsDirectory)'
  displayName: 'PowerShell Script'
  condition: always()

The condition: always() ensures that the screenshot is taken even after a request to cancel the build has been sent.

And this is the UI based equivalent:


Want a fancier solution?

Then grab the extension from the Azure DevOps Marketplace:

Agent Screenshot - Visual Studio Marketplace
Extension for Azure DevOps - Ever wondered what is happening on the agent?

97 Things Every Scrum Practitioner Should Know


Gunther shepherded a big group of Scrum practitioners from all across the globe to work together on this collection of 97 stories about Scrum. I'm proud he included me in his herd, together with not just fellow Scrum practitioners, but friends I've made across the globe through the worldwide Scrum community, especially the Scrum.org trainer community and the nlScrum Dutch Scrum Meetup.

The book will be released by O'Reilly in e-book form on May 4th, 2020, and the paper version should start shipping about a month later.

Order your copy now

  • Amazon.com
  • Amazon.de
  • Amazon.co.uk

97 Things Every Scrum Practitioner Should Know: Collective Wisdom from the Experts, Gunther Verheyen, ISBN 9781492073840.

Create an Organization level feed in Azure Artifacts


The official guidance is to create project-level feeds unless you know what you're doing, and to discourage organization-level feeds, the option has been removed from the UI; you need to create new ones using the REST API instead.

Can No Longer Create Organization-scoped Feeds - Developer Community
Developer Community for Visual Studio Product family

Exactly how, the issue doesn't explain. So let me show you how I did it.

1. Create a Bogus Feed

Navigate to the Artifacts hub and hit the + Create Feed button. Enter a bogus name for the feed, but don't hit Create yet!


Then open the Developer Tools in Chrome or Edge. Go to the Network tab and hit Create to create the feed:


2. Capture the POST call

From the Network tab, find the POST call to the _api/Packaging/Feeds endpoint and copy it as PowerShell (or whichever scripting language you're familiar with).


3. Change the call to target the Organization level

Change the following elements:

  1. Remove the Project GUID from the POST URL and the Path header.
  2. Remove the Project element from the payload
  3. Change the temporary-bogus-name to the desired value
  4. Add -UseBasicParsing if, like me, you don't have Internet Explorer installed
Don't copy my values verbatim. The Identity Descriptors are specific to your account and you need a valid Bearer Token.
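
For reference, a hedged sketch of what the modified call can look like. The public endpoint, payload and api-version shown here are assumptions based on the documented Feed Management API; capture your own call as described above:

$headers = @{ Authorization = "Bearer <token-from-devtools>" }
$body = '{ "name": "my-organization-feed" }'  # Project element removed from the payload
Invoke-WebRequest -UseBasicParsing -Method POST `
    -Uri "https://feeds.dev.azure.com/<organization>/_apis/packaging/feeds?api-version=5.1-preview.1" `
    -Headers $headers -Body $body -ContentType "application/json"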

Now run the code, and there's your feed!

Note: this example doesn't change the authentication headers, and the Bearer Token will expire, breaking the code. You can replace the Bearer Token with a base64-encoded PAT string, as sketched below.
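
A minimal sketch of swapping in a PAT (the variable names are illustrative; Azure DevOps accepts a PAT as the password half of a Basic authorization header):

$pat = "<personal-access-token>"
$basic = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
$headers = @{ Authorization = "Basic $basic" }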

Rename your master branch to something better


I just went through my GitHub repositories to rid them of their master branches. It was a relatively simple process that took me about an hour or two for all my repositories.

Scott Hanselman explains the base process. It's a simple set of steps: create a new branch with a new name (I chose main), switch the default branch and delete the old master. See the sketch below the link.

Easily rename your Git default branch from master to main
The Internet Engineering Task Force (IETF) points out that ’Master-slave is an oppressive metaphor that will and should never become fully detached from ... ...
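
In short, the git side of the rename boils down to something like this (a sketch; switching the default branch happens in the repository settings on the host):

git branch -m master main         # rename the local branch
git push -u origin main           # publish main and set it as the tracking branch
# switch the default branch to main in the repository settings, then:
git push origin --delete master   # remove the old remote branch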

Some people seem to object to this, arguing that master is not from master/slave but from the concept of a 'manufacturing master' or 'master copy'. This mailing list thread shows that isn't the case:

Re: Replacing “master” reference in git branch names (was Re: Proposal:

I bit the bullet, but ran into a few extra things to repair in my Azure Pipelines. You may too, if you have continuous integration and/or continuous deployment enabled:

  • Default branch in Azure Pipeline builds
  • Trigger branch filters
  • Trigger branch in YAML files
  • Custom conditions
  • Artifact branch triggers
  • Artifact branch filters
  • Remove branch protection from the old branch

And while I'm not using these on my extensions, in Azure Repos you may also need to check branch policies and permissions. Things can become tricky there, as policies and permissions can be set with wildcards, in which case you may need to resort to the CLI to fix them.

Knowing where to look is half the work.

Rename your master branch in Azure Repos


Renaming your master branch in Azure Repos can be as simple as a few clicks. But if you have complex policies or permissions in place, it may be a little more work.

To rename your master branch you have to create a new branch and then delete the old one:

  1. Use the context menu to create a new branch (+ New branch) from master.

2. Choose a better name for your branch; main will do:


3. Set the new main branch as your new default branch:


4. And finally, delete your old master branch.


If you're also using Azure Pipelines (which you should be if you're using Azure Repos), you may need to fix a few other things as well; they're essentially the same as the ones you'll need to fix when using Azure Pipelines together with GitHub.
