UUID vs GUID – Really The Same Thing?

I’ve now had several projects in which one question comes up repeatedly: are UUID and GUID the same thing? The question has gained new relevance because many systems based on microservice architectures use a UUID or GUID as the global identifier of a data record, object instance, or similar entity.

Let’s start with the UUID: it is a unique identifier with a length of 128 bits (16 bytes), structured as defined in the standard IETF RFC 4122. A GUID is also a 128-bit unique identifier, but the term has historically been strongly coupled to Microsoft technologies.

Microsoft states in its documentation that a GUID is specified according to RFC 4122 and that the term may be used interchangeably with UUID. Based on the Microsoft documentation, we can therefore state that all UUIDs are GUIDs. But does it also hold the other way around?

RFC 4122 defines several variants of UUIDs (section 4.1.1), because 128-bit identifiers were already in use before the specification was written, and the specification tried to incorporate those legacy definitions. The four variants defined by RFC 4122 are:

  1. Reserved, Network Computing System (NCS) backward compatibility.
  2. The variant specified by RFC 4122 itself, which contains five sub-variants (also called versions).
  3. Reserved, Microsoft Corporation backward compatibility.
  4. Reserved for future definition.

Since these legacy 128-bit identifiers are included in RFC 4122, e.g. variant 3 in the list above covers Microsoft COM identifiers (who remembers the happy days of COM programming 😉), it can be stated that GUIDs are also UUIDs. However, not all GUIDs are variant 2 UUIDs.

There’s always a BUT. The standard ITU-T X.667 | ISO/IEC 9834-8:2004 (‘Information technology – Open Systems Interconnection – Procedures for the operation of OSI Registration Authorities: Generation and registration of Universally Unique Identifiers (UUIDs) and their use as ASN.1 Object Identifier components’) states in section 11.2 that a UUID shall conform to the condition ‘All UUIDs conforming to this Recommendation | International Standard shall have variant bits with bit 7 of octet 7 set to 1 and bit 6 of octet 7 set to 0.’ This corresponds ONLY to variant 2. Therefore only variant 2 GUIDs are UUIDs compatible with both RFC 4122 and ISO/IEC 9834-8.
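To make that bit condition concrete, here is a small Java sketch (the class name is mine) that checks exactly those two variant bits on a freshly generated UUID. The octet in question is the most significant byte of the UUID’s least significant 64 bits:

```java
import java.util.UUID;

public class VariantBitsCheck {
    public static void main(String[] args) {
        UUID uuid = UUID.randomUUID();

        // The variant octet is the most significant byte of the
        // least significant 64 bits of the UUID.
        int variantOctet = (int) (uuid.getLeastSignificantBits() >>> 56) & 0xFF;

        int bit7 = (variantOctet & 0x80) != 0 ? 1 : 0;
        int bit6 = (variantOctet & 0x40) != 0 ? 1 : 0;

        System.out.println("bit 7 = " + bit7 + ", bit 6 = " + bit6);
        // prints: bit 7 = 1, bit 6 = 0
    }
}
```

For every UUID produced by `UUID.randomUUID()` the two bits come out as 1 and 0, i.e. the standard’s condition holds.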

With this knowledge of the standards, let’s check whether a UUID and a GUID are the same within the Java VM and the .NET runtime, as this is a typical combination in enterprise software environments. Are they both variant 2 types of 128-bit identifiers in the same format?

For the test, I create a UUID using Java 22. The Java UUID class has methods to read the variant and the version directly.

UUID uuid = UUID.randomUUID();
int variant = uuid.variant();
int version = uuid.version();

The code returns variant 2 and version 4. The binary representation is encoded in big-endian format, as RFC 4122 requires.
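The big-endian layout can be verified by dumping the UUID into a byte array (a quick sketch; ByteBuffer’s default byte order is big-endian, matching RFC 4122):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class UuidByteLayout {
    public static void main(String[] args) {
        UUID uuid = UUID.randomUUID();

        // ByteBuffer writes big-endian by default, matching RFC 4122.
        ByteBuffer bb = ByteBuffer.allocate(16);
        bb.putLong(uuid.getMostSignificantBits());
        bb.putLong(uuid.getLeastSignificantBits());
        byte[] bytes = bb.array();

        // Version nibble: high half of octet 6; variant bits: top two bits of octet 8.
        System.out.println("version: " + ((bytes[6] >> 4) & 0x0F));      // prints 4
        System.out.println("variant bits: " + ((bytes[8] >> 6) & 0x03)); // prints 2 (binary 10)
    }
}
```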

Now I create a GUID using C# / .NET 8. The .NET Guid class offers no property or method to extract the variant and version directly, so I simply use the ToString() representation: the version nibble is the first character of the third group, the variant nibble the first character of the fourth group.

using System;

Guid guid = Guid.NewGuid();
Console.WriteLine(guid.ToString().Substring(14, 1)); // version nibble
Console.WriteLine(guid.ToString().Substring(19, 1)); // variant nibble

The code gives me version 4 and a variant nibble of 8, 9, a or b, i.e. the variant bit pattern 10: so Guid.NewGuid() also produces a variant 2, version 4 identifier, just like Java. The difference lies in the binary representation: when a GUID is serialized to bytes, e.g. via Guid.ToByteArray(), the first three fields are encoded in little-endian format, while Java and RFC 4122 use big-endian throughout.
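A small Java sketch (with a fixed example value so the effect is easy to see) shows what this byte-order difference does to an identifier: the same 16 bytes, read with .NET’s mixed-endian field layout, turn into a different value:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.UUID;

public class ByteOrderDemo {
    public static void main(String[] args) {
        // Fixed example value so the reordering is visible.
        UUID uuid = UUID.fromString("00112233-4455-6677-8899-aabbccddeeff");

        // RFC 4122 / Java: all fields big-endian.
        ByteBuffer be = ByteBuffer.allocate(16);
        be.putLong(uuid.getMostSignificantBits());
        be.putLong(uuid.getLeastSignificantBits());

        // Read the same 16 bytes the way .NET lays out a Guid:
        // the first three fields are little-endian.
        ByteBuffer le = ByteBuffer.wrap(be.array()).order(ByteOrder.LITTLE_ENDIAN);
        System.out.printf("%08x-%04x-%04x%n", le.getInt(), le.getShort(), le.getShort());
        // prints: 33221100-5544-7766  (instead of 00112233-4455-6677)
    }
}
```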

Now back to the initial question: are a UUID and a GUID the same? I think we can summarize it as: theoretically yes, practically no. A clean, homogeneous tech stack is rare in an enterprise environment, and interoperability is always an issue. The use of 128-bit identifiers in implementations with cross-cutting concerns is especially error-prone, because the same 16 bytes can be read as two different identifiers depending on the byte order.

My blog posts focus on C#, and as a lazy C# developer I prefer to do nothing in my .NET environment and let my colleagues from the Java world create deserializable unique identifiers for it. So here’s a simple Java class that creates valid, interoperable GUID bytes in the order .NET expects, e.g. for the Guid(byte[]) constructor. Yes, you can do the byte conversion more elegantly in Java, but this way it’s easier to see what happens under the hood 🔍.

package mytests.uuid;

import java.nio.ByteBuffer;
import java.util.UUID;

public class Guid
{
    public byte[] randomGuid() 
    {
        UUID uuid = UUID.randomUUID();
        return convertUuidToGuid(uuid);
    }
    
    private byte[] convertUuidToGuid(UUID uuid)
    {
        // Write the UUID in its canonical big-endian form first.
        ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
        bb.putLong(uuid.getMostSignificantBits());
        bb.putLong(uuid.getLeastSignificantBits());
    
        byte[] out = bb.array();
        byte swap;
    
        // Reverse the 4-byte first field (octets 0-3) to little-endian.
        swap = out[0];
        out[0] = out[3];
        out[3] = swap;
    
        swap = out[1];
        out[1] = out[2];
        out[2] = swap;
    
        // Reverse the 2-byte second field (octets 4-5).
        swap = out[4];
        out[4] = out[5];
        out[5] = swap;
    
        // Reverse the 2-byte third field (octets 6-7).
        swap = out[6];
        out[6] = out[7];
        out[7] = swap;
    
        // The remaining 8 octets keep their order in both worlds.
        return out;
    }    
}
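A quick usage sketch, with the conversion inlined so it runs standalone (the fixed example value makes the byte reordering visible; in real use you would call randomGuid()):

```java
import java.nio.ByteBuffer;
import java.util.UUID;

public class GuidDemo {
    public static void main(String[] args) {
        // Fixed example value; in real use this would be UUID.randomUUID().
        UUID uuid = UUID.fromString("00112233-4455-6677-8899-aabbccddeeff");

        // Same conversion as in the Guid class above, inlined for the demo.
        ByteBuffer bb = ByteBuffer.wrap(new byte[16]);
        bb.putLong(uuid.getMostSignificantBits());
        bb.putLong(uuid.getLeastSignificantBits());
        byte[] out = bb.array();
        byte t;
        t = out[0]; out[0] = out[3]; out[3] = t; // reverse field 1 (4 bytes)
        t = out[1]; out[1] = out[2]; out[2] = t;
        t = out[4]; out[4] = out[5]; out[5] = t; // reverse field 2 (2 bytes)
        t = out[6]; out[6] = out[7]; out[7] = t; // reverse field 3 (2 bytes)

        StringBuilder hex = new StringBuilder();
        for (byte b : out) hex.append(String.format("%02x", b));
        System.out.println(hex);
        // prints: 33221100554477668899aabbccddeeff
    }
}
```

On the .NET side, Guid.Parse("00112233-4455-6677-8899-aabbccddeeff").ToByteArray() yields exactly these 16 bytes, so new Guid(bytes) reconstructs the same identifier.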

I hope this post helps Java and .NET developers work better together. Happy coding 🚀!