We’re very excited to announce the general availability of the Google Cloud Spanner provider for Entity Framework Core, which allows your Entity Framework Core applications to take advantage of Cloud Spanner’s scale, strong consistency, and up to 99.999% availability. In this post, we’ll cover how to get started with the provider and highlight the supported features.
Set Up the Project
The Cloud Spanner provider is compatible with Microsoft.EntityFrameworkCore 3.1. After you have set up Entity Framework Core, add the Cloud Spanner provider package to the project. You can do this by editing your csproj file as follows:
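A minimal sketch of the package reference, using the Google.Cloud.EntityFrameworkCore.Spanner package; the version shown is illustrative, so substitute the latest release:

```xml
<ItemGroup>
  <!-- Cloud Spanner provider for Entity Framework Core; use the latest released version. -->
  <PackageReference Include="Google.Cloud.EntityFrameworkCore.Spanner" Version="1.0.0" />
</ItemGroup>
```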
Set Up Cloud Spanner
Before you begin using Cloud Spanner:
- Follow the Set Up guide to configure a Cloud Project, authentication, and authorization.
- Create a Cloud Spanner instance and database by following the Quickstart using the Cloud Console.
Set up a new database
If you don’t have an existing database, you may use the following example (also available on GitHub) to create a new model, populate it with data, then query the database. See Migrating an Existing Database below if you have an existing database.
Data model
We will use a simple data model consisting of singers, albums, and tracks, created using the Cloud Console for simplicity.
Create a model
In Entity Framework Core, data is accessed through a model. The model consists of the entity classes, a context that represents a session with the database, and the configuration for those entities.
In this example model, we have three entities, representing a Singer, an Album, and a Track.
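The entity classes might look roughly like the following sketch; the class and property names are illustrative rather than taken from the sample:

```csharp
using System;
using System.Collections.Generic;

public class Singer
{
    public Guid SingerId { get; set; }   // Primary key, generated client side as a Guid.
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public List<Album> Albums { get; set; }
}

public class Album
{
    public Guid AlbumId { get; set; }
    public string Title { get; set; }
    public Guid SingerId { get; set; }   // Foreign key to the Singer that recorded the album.
    public Singer Singer { get; set; }
    public List<Track> Tracks { get; set; }
}

public class Track
{
    public Guid AlbumId { get; set; }    // Part of the composite key; also identifies the parent Album.
    public long TrackId { get; set; }
    public string Title { get; set; }
    public Album Album { get; set; }
}
```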
When configuring the model, we use two different approaches to defining relationships between entities:
- Album references Singer using a foreign key constraint, by including a Singer in the Album entity. This ensures that each Album references an existing Singer record, and that a Singer cannot be deleted without also deleting all Albums of that Singer.
- Track references Album by being interleaved in the parent entity Album, and is configured through OnModelCreating() with a call to InterleaveInParent(), as shown in the sketch after this list. This ensures that all Track records are stored physically together with the parent Album, which makes accessing them together more efficient.
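A sketch of what the corresponding context could look like, using the illustrative entity classes above; the UseSpanner() and InterleaveInParent() extension methods come from the provider package, and their exact namespaces and overloads should be checked against the provider documentation:

```csharp
using Microsoft.EntityFrameworkCore;
// The UseSpanner() and InterleaveInParent() extensions come from the
// Google.Cloud.EntityFrameworkCore.Spanner package (assumed here).

public class MusicDbContext : DbContext
{
    public DbSet<Singer> Singers { get; set; }
    public DbSet<Album> Albums { get; set; }
    public DbSet<Track> Tracks { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) =>
        // Connection string format:
        // Data Source=projects/<my-project>/instances/<my-instance>/databases/<my-database>
        optionsBuilder.UseSpanner(
            "Data Source=projects/my-project/instances/my-instance/databases/my-database");

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Album -> Singer is a regular foreign key relationship.
        modelBuilder.Entity<Album>()
            .HasOne(a => a.Singer)
            .WithMany(s => s.Albums)
            .HasForeignKey(a => a.SingerId);

        // Track uses a composite primary key and is interleaved in its parent Album.
        modelBuilder.Entity<Track>().HasKey(t => new { t.AlbumId, t.TrackId });
        modelBuilder.Entity<Track>().InterleaveInParent(typeof(Album));
    }
}
```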
Insert data
Data can be inserted into the database by first creating an instance of the database context, adding the new entities to the DbSet defined in the model, and finally saving the changes on the context.
The provided connection string must be in the format of Data Source=projects/<my-project>/instances/<my-instance>/databases/<my-database>.
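Putting this together, inserting a singer and an album might look like the following sketch (the context and entity names are the illustrative ones from above):

```csharp
using var context = new MusicDbContext();

var singer = new Singer
{
    SingerId = Guid.NewGuid(),   // Primary key generated client side.
    FirstName = "Alice",
    LastName = "Henderson",
};
context.Singers.Add(singer);

context.Albums.Add(new Album
{
    AlbumId = Guid.NewGuid(),
    Title = "Starting Again",
    SingerId = singer.SingerId,
});

// All pending changes are applied to Cloud Spanner in a single transaction.
context.SaveChanges();
```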
Query data
You may query for a single entity as follows:
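For example, fetching a single singer by primary key with the illustrative model above:

```csharp
// Find looks up the entity by its primary key and returns null if no row matches.
var singer = context.Singers.Find(singerId);
```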
You can also use LINQ to query the data as follows:
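A LINQ query over the same illustrative model could look like this:

```csharp
// All albums by singers with the last name Henderson, ordered by title.
var albums = context.Albums
    .Where(a => a.Singer.LastName == "Henderson")
    .OrderBy(a => a.Title)
    .ToList();
```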
Migrate an existing database
The Cloud Spanner Entity Framework Core provider supports database migrations. Follow this example to generate the database schema using Migrations, with the data model as the source of truth. You can also let Entity Framework Core generate code from an existing database using Reverse Engineering. Take a look at Managing Schemas for further details.
Features
Transaction support
By default, the provider applies all changes from a single call to SaveChanges in one transaction. If you want to group multiple SaveChanges calls in a single transaction, you can manually control the read/write transaction by following this example.
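A sketch of grouping two SaveChanges calls in one read/write transaction, using the standard Entity Framework Core transaction API and the illustrative model above:

```csharp
using var context = new MusicDbContext();
using var transaction = context.Database.BeginTransaction();

context.Singers.Add(new Singer { SingerId = Guid.NewGuid(), LastName = "Morrison" });
context.SaveChanges();

context.Singers.Add(new Singer { SingerId = Guid.NewGuid(), LastName = "Drake" });
context.SaveChanges();

// Both SaveChanges calls are committed atomically.
transaction.Commit();
```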
If you need to execute multiple consistent reads and no write operations, it is preferable to use a read-only transaction as shown in this example.
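A rough sketch, assuming the provider exposes a BeginReadOnlyTransaction() extension on the Database facade for this purpose (see the linked example for the exact API):

```csharp
using var context = new MusicDbContext();
// Assumed provider extension; a read-only transaction takes no locks and
// gives all reads a single consistent view of the database.
using var transaction = context.Database.BeginReadOnlyTransaction();

var singerCount = context.Singers.Count();
var albumCount = context.Albums.Count();
```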
Entity Framework Core feature support
Entity Framework Core supports concurrency handling using concurrency tokens, and this example shows how to use this feature with the Cloud Spanner provider.
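As a rough sketch, an illustrative Version property on Singer (not part of the model above) could be registered as a concurrency token in OnModelCreating():

```csharp
// Compare the Version column on every update; a mismatch means another
// transaction changed the row since it was read.
modelBuilder.Entity<Singer>()
    .Property(s => s.Version)
    .IsConcurrencyToken();
```

When SaveChanges() detects such a conflict, it throws a DbUpdateConcurrencyException that the application can handle, for example by refreshing the entity and retrying.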
Cloud Spanner feature support
In addition to the interleaved tables mentioned above, the provider supports the following Cloud Spanner features.
Commit timestamps
Commit timestamp columns can be configured during model creation using the UpdateCommitTimestamp annotation as shown in the sample DbContext. The commit timestamps can be read after an insert and/or an update, based on the configured annotation, as shown in this example.
Mutations
Depending on the transaction type, the provider automatically chooses between mutations and DML for executing updates.
An application can also manually configure a DbContext to use only mutations or only DML statements for all updates. This example shows how to use mutations for all updates. However, note the following caveats when choosing these options:
Using only mutations speeds up the execution of large batches of inserts/updates/deletes, but it does not allow a transaction to read its own writes during a manual transaction.
Using only DML reduces the execution speed of large batches of inserts/updates/deletes that are executed as implicit transactions.
Query Hints
Cloud Spanner supports various statement hints and table hints, which can be configured in the provider by using a Command Interceptor. This example shows how to configure a command interceptor in the DbContext to set a table hint.
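A sketch of such an interceptor, built on the standard Entity Framework Core DbCommandInterceptor; the table name, index name, and string match are purely illustrative and should be adjusted to the SQL the provider actually generates:

```csharp
using System.Data.Common;
using Microsoft.EntityFrameworkCore.Diagnostics;

// Adds a FORCE_INDEX table hint to queries that read from the Albums table.
// Override ReaderExecutingAsync as well if the application uses async queries.
public class AlbumsTableHintInterceptor : DbCommandInterceptor
{
    public override InterceptionResult<DbDataReader> ReaderExecuting(
        DbCommand command, CommandEventData eventData, InterceptionResult<DbDataReader> result)
    {
        command.CommandText = command.CommandText.Replace(
            "FROM Albums", "FROM Albums@{FORCE_INDEX=AlbumsByTitle}");
        return base.ReaderExecuting(command, eventData, result);
    }
}
```

The interceptor is then registered on the context options, for example with optionsBuilder.AddInterceptors(new AlbumsTableHintInterceptor()) in OnConfiguring().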
Stale reads
Cloud Spanner provides two read types: strong reads and stale reads. All read-only transactions perform strong reads by default. You can opt into a stale read when querying data by using an explicit timestamp bound, as shown in this example.
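A rough sketch of a stale read, assuming the read-only transaction extension accepts a TimestampBound from Google.Cloud.Spanner.Data (see the linked example for the exact API):

```csharp
using Google.Cloud.Spanner.Data;

using var context = new MusicDbContext();
// Assumed provider overload: read data that is at most ten seconds stale
// instead of performing a strong read.
using var transaction = context.Database.BeginReadOnlyTransaction(
    TimestampBound.OfMaxStaleness(TimeSpan.FromSeconds(10)));

var singers = context.Singers.ToList();
```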
Generated columns
Cloud Spanner supports generated columns, which can be configured in the provider using the ValueGeneratedOnAddOrUpdate annotation in the model. This example shows how a generated column can be read after an entity is saved.
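For example, if the database has a generated FullName column computed from FirstName and LastName (an illustrative column, not part of the model above), the mapping could look like this:

```csharp
// FullName is computed by the database on every insert and update, so
// Entity Framework Core reads the value back after SaveChanges().
modelBuilder.Entity<Singer>()
    .Property(s => s.FullName)
    .ValueGeneratedOnAddOrUpdate();
```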
Limitations
The provider has some limitations on generating primary key values, because Cloud Spanner does not support sequences, identity columns, or other server-side value generators that produce unique values suitable for primary keys. If your table does not contain a natural primary key, the best option is to use a client-side Guid generator for the primary key.
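A minimal sketch of that approach, using the standard Entity Framework Core GuidValueGenerator for the illustrative Singer entity:

```csharp
using Microsoft.EntityFrameworkCore.ValueGeneration;

// Generate a Guid on the client for every new Singer, since Cloud Spanner
// cannot generate primary key values in the database.
modelBuilder.Entity<Singer>()
    .Property(s => s.SingerId)
    .HasValueGenerator<GuidValueGenerator>()
    .ValueGeneratedOnAdd();
```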
Getting involved
The Cloud Spanner Entity Framework Core provider is an open-source project on GitHub and we welcome contributions in the form of feedback or pull requests.
We would like to thank Knut Olav Løite and Lalji Kanjareeya for their work on this integration, and Ben Wulfe for their earlier work on the project.