In the context of Firebase, Cloud Functions is a serverless framework that executes your deployed code in response to events triggered by Firebase features or by external events such as HTTPS requests.
You can use this link to learn more about Cloud Functions: how to add them to an existing Firebase project, how to trigger a function, how to deploy, and so on.
The purpose of this article is to give you an idea of a proper architecture to follow in order to make the code more readable and manageable. Well-documented Node-based backend architectures are rare to find on the internet, so I think this might be helpful for Node backend developers as well.
I will not cover writing test cases for the cloud functions in this article. I will talk about the basic folder structure, but an in-depth look at writing test cases is a topic for another article.
We have two Firestore collections named Children (Child class) and Ages (Age class). Notice that we have given plural names to the collections and singular names to their corresponding classes.
In this tutorial we will solve a small problem: a child creates a document in the Children collection, then a cloud function triggers automatically, calculates the age, and writes a document to the Ages collection.
You can refer to the documentation mentioned above to learn how to create a cloud function project. Make sure to select TypeScript as the language and to turn on linting. In the beginning you will find only the src and node_modules folders. The lib folder is created automatically when you compile the TypeScript code, and you create the test folder yourself when you start writing test cases.
The above image shows the basic folder structure of the project which is inside the src folder.
dao — fetching from and writing to Firestore happens through the files in here
interfaces — the main interfaces and abstract classes used in the project
mapper — converts Firestore data models to our internal models and vice versa
models — internal data models (entities)
services — the main business logic
util — utility-related code
I have put commonly used interfaces and abstract classes here.
This interface is used to implement the functions that run on cloud function triggers. In our case, the cloud function should trigger when a new document is added to Firestore, and the code that runs after that trigger is implemented through this interface.
The above code is the function prototype for the onCreate trigger. If your code handles more types of triggers, you can declare them all here. I have added some more functions in the GitHub repository of this project.
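As a rough illustration of what such a trigger interface can look like, here is a minimal sketch. The names (`DocumentSnapshot`, `OnCreateHandler`) and the simplified snapshot shape are my assumptions, not the article's exact code:

```typescript
// Simplified stand-in for the snapshot type that firebase-functions
// passes to Firestore onCreate triggers (illustrative, not the real type).
interface DocumentSnapshot {
  id: string;
  data(): Record<string, unknown>;
}

// Contract for any class that reacts to an onCreate trigger.
interface OnCreateHandler {
  onCreate(snapshot: DocumentSnapshot): Promise<void>;
}

// A trivial implementation, used here only to show the shape.
class LoggingHandler implements OnCreateHandler {
  public lastId = "";
  async onCreate(snapshot: DocumentSnapshot): Promise<void> {
    this.lastId = snapshot.id; // record which document fired the trigger
  }
}
```

Declaring one such function prototype per trigger type (onUpdate, onDelete, …) keeps every handler in the project conforming to the same contract.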
Every model is created by extending this abstract class. This includes some basic information that is needed by every model in the system.
More information about these variables, and how they are used, is provided in the Models section below.
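A minimal sketch of what this abstract base class can look like follows. The field names `ref` and `id` come from the article; the simplified `DocumentReference` stand-in is my assumption:

```typescript
// Simplified stand-in for Firestore's DocumentReference (illustrative).
interface DocumentReference {
  id: string;
  path: string;
}

// Base class that every internal model extends.
abstract class DBModel {
  // Reference to the Firestore document backing this model,
  // or null if the model has not been persisted yet.
  ref: DocumentReference | null = null;

  // Desired document ID; by convention it mirrors ref's ID once ref is set.
  id: string | null = null;
}
```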
This interface is used to convert Firestore data structures to internal models and vice versa.
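One plausible shape for that mapper contract is sketched below; the names `Mapper`, `toModel`, and `toData` are assumptions based on the article's description:

```typescript
// Plain object shape that Firestore stores (illustrative alias).
type DocumentData = Record<string, unknown>;

// Converts between Firestore data and an internal model T, in both directions.
interface Mapper<T> {
  // Firestore document data -> internal model
  toModel(data: DocumentData): T;
  // internal model -> plain object that Firestore can store
  toData(model: T): DocumentData;
}
```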
It is better to have an interface for the data access layer as well, so that layer is abstracted too. The data access layer would then implement this interface for each model that needs to be saved in Firestore. For this small project, however, I am going to skip that abstraction.
Instances of the classes in this folder represent Firestore documents. Ideally, these classes should not have any dependencies. In my implementation, however, they depend on Firestore's DocumentReference class, because that helps when converting the models back into data structures that can be stored in Firestore; it is similar to keys (IDs) in MySQL. If you don't like that dependency, you can store the path of the document instead. You can implement this behavior by changing the ref field in DBModel and the mapper implementation related to each model.
As mentioned above, the ref and id fields are used when converting the models to Firestore data structures. The ref field holds the reference of the document that stores this model's data. The id field is useful when creating the document: by default, it equals the documentId of ref whenever ref is not null. Making use of the id field is discussed in the DAO section.
You can see the use of these variables in the mapper classes. You might object that declaring them here adds another level of Firestore dependency to the model classes; you can avoid that by declaring them in the specific mappers instead, but I prefer to keep them here.
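To make the static variables concrete, here is a hedged sketch of the two model classes. The field names (`name`, `birthYear`, `value`) and the constant names are my assumptions; only the Child/Age class names come from the article, and the `extends DBModel` part is omitted to keep the sketch self-contained:

```typescript
// Illustrative models; in the real project both extend DBModel.
class Child {
  // Firestore field names, kept in one place so mappers
  // never repeat string literals.
  static readonly NAME = "name";
  static readonly BIRTH_YEAR = "birthYear";

  constructor(public name: string, public birthYear: number) {}
}

class Age {
  static readonly VALUE = "value";
  static readonly CHILD = "child"; // reference/path to the Child document

  constructor(public value: number, public child: string) {}
}
```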
As I said above, the job of the mappers is to convert Firestore data structures to internal models when reading data, and the reverse when writing data. For the two models we have, we need two corresponding mappers.
When accessing the Firestore data structures, notice that I have used the static variables declared in the model classes instead of string literals. This minimizes the chance of a typo.
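A minimal sketch of one such mapper, using the static constants as field keys, might look like this. All names here are assumptions rather than the article's exact code:

```typescript
type DocumentData = Record<string, unknown>;

// Illustrative model with its field-name constants.
class Child {
  static readonly NAME = "name";
  static readonly BIRTH_YEAR = "birthYear";
  constructor(public name: string, public birthYear: number) {}
}

class ChildMapper {
  // Firestore document data -> internal model. Note the static
  // constants instead of bare string literals.
  toModel(data: DocumentData): Child {
    return new Child(
      data[Child.NAME] as string,
      data[Child.BIRTH_YEAR] as number,
    );
  }

  // Internal model -> plain object Firestore can store.
  toData(model: Child): DocumentData {
    return {
      [Child.NAME]: model.name,
      [Child.BIRTH_YEAR]: model.birthYear,
    };
  }
}
```

If a field name ever changes, it changes in exactly one place, and the compiler catches every stale usage.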
The util folder stores the utility code of the system. Currently there is only a single file there, named dBUtil.ts, which holds the names of the Firestore collections used by the system.
Notice the naming convention used for the Firestore collections: a collection name is always plural and written in camel case, but the variable that stores a collection's name uses the singular form.
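In code, dBUtil.ts might look roughly like this (the file name comes from the article; the exact variable names and string values are my assumptions, and in the real project these constants would be exported):

```typescript
// util/dBUtil.ts — central place for Firestore collection names.
// Collection names are plural; the variables holding them are singular.
const child = "children";
const age = "ages";
```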
These classes are used to fetch data from and store data in Firestore. There are two things these classes are required to do.
In this case, we only need a DAO to store documents in the Ages collection.
A model that needs to be stored in Firestore does not yet have a document reference, so its ref field is null. The documentId of the newly created document is determined by the model's id field: if id is null, the document ID is generated randomly, but if a string is given, it is used as the document's ID.
After the document is created, the DAO has to update the model's ref field. This ensures that the newly created document is referenced by the model from that point on, so the rest of the project doesn't have to worry about it. If you later modify the same model and write it back to the database, you won't end up creating duplicate documents.
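The two DAO responsibilities can be sketched as follows. This is an illustrative, in-memory stand-in for Firestore so the sketch is self-contained; every name here is an assumption, and a real DAO would call `collection.doc(id).set(...)` on the Firestore client instead:

```typescript
interface DocumentReference { id: string; path: string; }

// Minimal Age model with the DBModel-style ref/id fields inlined.
class Age {
  ref: DocumentReference | null = null;
  id: string | null = null;
  constructor(public value: number) {}
}

// Fake collection so the sketch runs without a Firestore instance.
class FakeCollection {
  private docs = new Map<string, Record<string, unknown>>();
  private counter = 0;
  set(id: string, data: Record<string, unknown>): void { this.docs.set(id, data); }
  newId(): string { return `auto-${++this.counter}`; }
  get(id: string) { return this.docs.get(id); }
}

class AgeDao {
  constructor(private collection: FakeCollection) {}

  async create(model: Age): Promise<void> {
    // 1. Honor the model's id if one was given, otherwise generate one.
    const docId = model.id ?? this.collection.newId();
    this.collection.set(docId, { value: model.value }); // real DAO: await ref.set(...)

    // 2. Point the model at its new document, so later code updates
    //    this document instead of creating a duplicate.
    model.ref = { id: docId, path: `ages/${docId}` };
    model.id = docId;
  }
}
```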
I will add some more functionality to the DAO, such as updating documents, in the GitHub repository of this project.
In a separate article I will talk about the Repository design pattern, which can be used instead of a DAO to avoid the clutter that tends to build up inside DAOs (most likely bloater code smells). It also adds another level of abstraction to the code.
This is the core of the application: all the business logic lives inside this package. It is broken down into sub-packages, one per cloud function. Each sub-package has a Handler file to catch the cloud function trigger, plus a set of Service files. You can create multiple service files to implement the different pieces of logic the cloud function needs; the handler calls them when needed.
The functionality of a handler is to catch the cloud function trigger, convert the incoming Firestore data into an internal model using the appropriate mapper, and delegate the actual work to the services.
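Those handler responsibilities can be sketched like this. The class names, the simplified snapshot type, and the service's method are all my assumptions; a real service would write an Age document through the DAO:

```typescript
// Simplified stand-in for the trigger's snapshot type (illustrative).
interface DocumentSnapshot { id: string; data(): Record<string, unknown>; }

class Child {
  constructor(public name: string, public birthYear: number) {}
}

class ChildMapper {
  toModel(data: Record<string, unknown>): Child {
    return new Child(data["name"] as string, data["birthYear"] as number);
  }
}

class AgeService {
  public computed: number | null = null;
  async calculateAndStoreAge(child: Child): Promise<void> {
    // Real logic would persist an Age document via the DAO.
    this.computed = new Date().getFullYear() - child.birthYear;
  }
}

class ChildCreatedHandler {
  constructor(
    private mapper: ChildMapper,
    private service: AgeService,
  ) {}

  // Catch the trigger, map the raw data, delegate to the service.
  async onCreate(snapshot: DocumentSnapshot): Promise<void> {
    const child = this.mapper.toModel(snapshot.data());
    await this.service.calculateAndStoreAge(child);
  }
}
```

Keeping the handler this thin makes it easy to test: the mapping and the business logic each live behind their own seam.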
The services are the brain of the application. Processing, logical operations, and saving data when needed are all done through services. It is better to use a different class for each piece of functionality, although here I did the data processing and the Firestore update in the same class.
If you write interfaces to abstract the functionality of these service classes and the DAO, you will need to inject the implementations of those interfaces into the handlers and services. You can use the service locator pattern or pass the dependencies as constructor parameters; personally, I prefer constructor injection.
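Constructor injection behind an interface might look like the following sketch; the `AgeWriter` interface and every name in it are hypothetical, chosen only to show the pattern:

```typescript
// Abstraction over "something that persists an age value".
interface AgeWriter {
  write(value: number): Promise<void>;
}

class AgeService {
  // The concrete writer (a Firestore DAO in production, a fake in
  // tests) is injected rather than constructed inside the service.
  constructor(private writer: AgeWriter) {}

  async process(birthYear: number): Promise<void> {
    await this.writer.write(new Date().getFullYear() - birthYear);
  }
}

// A fake implementation, e.g. for unit tests.
class RecordingWriter implements AgeWriter {
  public values: number[] = [];
  async write(value: number): Promise<void> {
    this.values.push(value);
  }
}
```

Because the service only sees the interface, swapping Firestore for a fake in tests requires no changes to the service itself.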
There might be cases where you need the same service functionality across different cloud functions. Since we write a separate package per cloud function, sharing services across packages can feel a little odd. If you need that level of separation, you can duplicate the code in each package, but I prefer to call the appropriate function in the other package instead: duplicated code is a very bad smell to have in a project. If you have an interface abstracting the service classes, make sure to add the shared functions to that interface.
Finally, you have to connect the handler to the index file; the appropriate handler function is exported and executed via this index file.
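The wiring in index.ts might look roughly like this, assuming the firebase-functions v1 API. The exported name `calculateAge`, the trigger path, the import path, and the handler class are all hypothetical; this fragment only runs inside a deployed Firebase project:

```typescript
// index.ts — wire the Firestore trigger to the handler.
import * as functions from "firebase-functions";
// Hypothetical path to the handler described above.
import { ChildCreatedHandler } from "./services/childCreated/childCreatedHandler";

// Fires when a document is created under the (hypothetical) children path.
export const calculateAge = functions.firestore
  .document("children/{childId}")
  .onCreate((snapshot) => new ChildCreatedHandler().onCreate(snapshot));
```

The index file stays tiny on purpose: it only declares triggers and delegates, so all logic remains testable outside the Firebase runtime.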
This concludes our way of writing Firebase cloud functions. You can access the codebase of this small project on my GitHub. Clone it and play with it.
See you in another article. Bye!